Search Results

Search found 2696 results on 108 pages for 'compression formats'.


  • Best method to compress JSON string in terms of performance and compression ratio

    - by Eric Yin
    I have a JSON string that contains all kinds of settings, numbers, strings etc. The total JSON string usually falls into the 10K~50K range. I want to compress it before saving it to the database, so I wonder which compression method I should choose. I am using C# 4; I know I can choose gzip and deflate, but the compression ratio is not good (although the speed is good). More specifically, compression can be a little slow (since it happens only once) but the result should be small. Decompression should be lightning fast since it happens a lot. Please give some advice.
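
    A minimal sketch of the round trip, assuming the built-in DeflateStream is acceptable (it favors speed over ratio, so a zlib wrapper with a higher compression level is worth benchmarking if size matters more): compress the JSON to bytes before saving and inflate on read.

    using System.IO;
    using System.IO.Compression;
    using System.Text;

    static class JsonBlob
    {
        // Deflate the JSON string into a byte[] suitable for a binary column.
        public static byte[] Compress(string json)
        {
            byte[] raw = Encoding.UTF8.GetBytes(json);
            using (var output = new MemoryStream())
            {
                using (var deflate = new DeflateStream(output, CompressionMode.Compress))
                    deflate.Write(raw, 0, raw.Length);
                return output.ToArray();
            }
        }

        // Inflate the stored bytes back into the original JSON string.
        public static string Decompress(byte[] blob)
        {
            using (var input = new MemoryStream(blob))
            using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
            using (var reader = new StreamReader(deflate, Encoding.UTF8))
                return reader.ReadToEnd();
        }
    }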

    Read the article

  • Compressing as GZip WCF requests (SOAP and REST)

    - by Joannes Vermorel
    I have a .NET 3.5 web app hosted on Windows Azure that exposes several WCF endpoints (both SOAP and REST). The endpoints typically receive 100x more data than they serve (a lot of data is uploaded, much less is downloaded). Hence, I would like to take advantage of HTTP GZip compression, not from the server viewpoint but rather from the client viewpoint, sending compressed requests (returning compressed responses would be fine, but won't bring much gain anyway). Here is the little C# snippet used on the client side to activate WCF: var binding = new BasicHttpBinding(); var address = new EndpointAddress(endPoint); _factory = new ChannelFactory<IMyApi>(binding, address); _channel = _factory.CreateChannel(); Any idea how to adjust this so that compressed HTTP requests can be made?

    Read the article

  • How does git save space and stay fast at the same time?

    - by eSKay
    I just saw the first git tutorial at http://blip.tv/play/Aeu2CAI How does git store all the versions of all the files and still manage to be more economical in space than subversion, which saves only the latest version of the code? I know this can be done using compression, but that would come at the cost of speed, yet this also says that git is much faster (though where it gains the most is the fact that most of its operations are offline). So, my guess is that git compresses data extensively and is still faster because decompression + work is still faster than network_fetch + work. Am I correct? Even close?

    Read the article

  • searching within a compressed sorted fixed width file

    - by user275455
    Assume I have a regular compressed fixed width file that is sorted on one of the fields. Given that I know the length of the records, I can use lseek to implement a binary search for records whose field matches a given value without having to read the entire file. Now the difficulty is that the file is gzipped. Is it possible to do this without completely inflating the file? If not with gzip, is there any compression format that supports this kind of behavior?
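
    A minimal sketch of the uncompressed starting point (record size, key offset and key length are assumed parameters, not from the question): with fixed-width records, seeking to record #mid gives the usual binary search. A plain .gz stream cannot be seeked this way; compression schemes that work in independently indexed blocks are the usual way to get random access back.

    using System.IO;
    using System.Text;

    static class FixedWidthSearch
    {
        // Binary search over a sorted, uncompressed fixed-width file; returns the
        // matching record index, or -1 if no record carries the key.
        public static long Find(string path, int recordSize, int keyOffset, int keyLength, string key)
        {
            using (var f = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                long lo = 0, hi = f.Length / recordSize - 1;
                var buf = new byte[keyLength];
                while (lo <= hi)
                {
                    long mid = lo + (hi - lo) / 2;
                    f.Seek(mid * recordSize + keyOffset, SeekOrigin.Begin);   // jump straight to the record
                    f.Read(buf, 0, keyLength);                                // assumes the full key is read
                    int cmp = string.CompareOrdinal(Encoding.ASCII.GetString(buf), key);
                    if (cmp == 0) return mid;
                    if (cmp < 0) lo = mid + 1; else hi = mid - 1;
                }
                return -1;
            }
        }
    }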

    Read the article

  • How to efficiently deal with a large amount of HTML5 canvas pixel data over websockets

    - by user730569
    Using imageData = context.getImageData(0, 0, width, height); JSON.stringify(imageData.data); I grab the pixel data, convert it to a string, and then send it over the wire via websockets. However, this string can be pretty large, depending on the size of the canvas object. I tried using the compression technique found here: JavaScript implementation of Gzip, but socket.io throws the error "Websocket message contains invalid character(s)". Is there an effective way to compress this data so that it can be sent over websockets?

    Read the article

  • How can I compress JPEG images in Java without losing any metadata in the image?

    - by guitarpoet
    I want to compress JPEG files using Java. I do it like this: read the image as a BufferedImage, then write the image to another file with a given compression rate. OK, that seems easy, but I find the ICC color profile and the EXIF information are gone in the new file, and the DPI of the image has dropped from 240 to 72. It looks different from the original image. A tool like Preview in OS X can perfectly change the quality of the image without affecting the other information. Can I do this in Java? At least keep the ICC color profile so that the image colors look the same as the original photo?

    Read the article

  • Compressing digitized document images

    - by Adabada
    Hello, we are now required by law to digitize all the financial documents in our company and submit them to evaluations every 3 months. Since this is sensitive data we decided to take matters into our own hands and build some sort of digital data archiver. The tool works perfectly, but after 7 months of usage we are beginning to worry about the disk space used by these images. Here is some info on the amount of documents digitized: 15K documents scanned and archived per day, with a final PNG size of +- 860KB: 15 000 * 860 kilobits = 1.53779984 gigabytes. 30 days of work per month: 1.53779984 gigabytes * 30 = 46.1339952 gigabytes. Expected disk space usage after 1 year: 46.1339952 gigabytes * 12 = 553.607942 gigabytes. So far we're at 424 gigabytes of disk space used, not counting backups. We're using PNG as the image format, but I would like to know if anyone has any advice on a better compression algorithm for images, alternative strategies for compressing the PNGs even more, or better ways to archive images so as to save disk space. Any help would be appreciated, thanks.

    Read the article

  • how to compress a PNG image using Java

    - by 116213060698242344024
    Hi, I would like to know if there is any way in Java to reduce the size of an image (using any kind of compression) that was loaded as a BufferedImage and is going to be saved as a PNG. Maybe some sort of PNG ImageWriteParam? I didn't find anything helpful, so I'm stuck. Here's a sample of how the image is loaded and saved: public static BufferedImage load(String imageUrl) { Image image = new ImageIcon(imageUrl).getImage(); BufferedImage bufferedImage = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_INT_ARGB); Graphics2D g2D = bufferedImage.createGraphics(); g2D.drawImage(image, 0, 0, null); g2D.dispose(); return bufferedImage; } public static void storeImageAsPng(BufferedImage image, String imageUrl) throws IOException { ImageIO.write(image, "png", new File(imageUrl)); }

    Read the article

  • Google presents « Courgette », its differential compression algorithm for reducing the size of Chrome updates

    Google presents « Courgette », its differential compression algorithm, used to reduce the size of updates to the Chrome browser. For an application that evolves as fast as Google Chrome, downloading the many updates could become a real headache if users had to fetch the full browser installer (about 10 MB) every time. Many of them would certainly balk at the idea of saturating their connection with bulky updates...

    Read the article

  • How can I replicate Google Page Speed's lossless image compression as part of my workflow?

    - by Keefer
    I love that Google's Page Speed is able to losslessly compress a lot of my images, but I'd love to make it part of my workflow, prior to uploading a site and making it live. Is there anything I can run locally to give me the same lossless compression? I currently export images using Export for Web in Photoshop, and use a little application called PNGCrusher to reduce the file size of PNGs. I'd love to find a faster way, though, than saving out and replacing the individual images from Page Speed's results.

    Read the article

  • What are the advantages and disadvantages of the various virtual machine image formats?

    - by Matt
    Xen and VirtualBox etc. both support a range of different virtual machine image formats: vmdk, vdi, qcow & qcow2, hdd & vhd. Without any bias toward a particular product, I want to know the advantages and disadvantages of the various formats from a features perspective, as well as robustness and speed. One piece of info I discovered in a forum post was this: "The major difference is that VDI uses relatively large blocks (1MB) when growing an image, and thus has less overhead for block pointers etc., but isn't ultimately space efficient in the sense that if a single byte is non-zero in such a 1MB block the entire space is used. VMDK in contrast uses 64K blocks, and thus has more management overhead and generally a bit less disk space consumption. What offsets this is that VDI is more efficient when it comes to snapshots." You might be thinking, I want to know this because I want to know which format to choose? Not exactly; I'm developing some software which utilises these formats and want to support one or more of them. Simplicity, large disks and ease of development are my main drivers.

    Read the article

  • asp.net mvc compress stream and remove whitespace

    - by Bigfellahull
    Hi, so I am compressing my output stream via an action filter: var response = filterContext.HttpContext.Response; response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress); This works great. Now, I would also like to remove the excess whitespace present. I found Mads Kristensen's HTTP module http://madskristensen.net/post/A-whitespace-removal-HTTP-module-for-ASPNET-20.aspx. I added the WhitespaceFilter class and added a new filter like the compression one: var response = filterContext.HttpContext.Response; response.Filter = new WhitespaceFilter(response.Filter); This also works great. However, I seem to be having problems combining the two! I tried: var response = filterContext.HttpContext.Response; response.Filter = new DeflateStream(new WhitespaceFilter(response.Filter), CompressionMode.Compress); However this results in some major issues. The HTML gets completely messed up and sometimes I get a 330 error. It seems that the whitespace filter's Write method gets called multiple times. The first time the HTML string is fine, but on subsequent calls it's just random characters. I thought it might be because the stream had been deflated, but isn't the whitespace filter using the untouched stream and then passing the resulting stream to the DeflateStream call? Any ideas?
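
    A minimal sketch of the reversed chaining, assuming WhitespaceFilter is the class from the linked module and that ASP.NET writes the raw HTML into whatever stream is assigned to Response.Filter first: the whitespace filter has to be the outer stream so it sees plain markup, and the DeflateStream the inner one so compression happens last.

    using System.IO.Compression;
    using System.Web.Mvc;

    public class StripThenCompressAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var response = filterContext.HttpContext.Response;
            response.AppendHeader("Content-Encoding", "deflate");
            response.Filter = new WhitespaceFilter(            // outer: receives the HTML
                new DeflateStream(response.Filter,             // inner: compresses the stripped output
                                  CompressionMode.Compress));
        }
    }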

    Read the article

  • iis7 compress dynamic content from custom handler

    - by Malloc
    I am having trouble getting dynamic content coming from a custom handler to be compressed by IIS 7. Our handler spits out JSON data (Content-Type: application/json; charset=utf-8) and responds to URLs that look like: domain.com/example.mal/OperationName?Param1=Val1&Param2=Val2 In IIS 6, all we had to do was edit the MetaBase.xml and, in the IIsCompressionScheme element, make sure that the HcScriptFileExtensions attribute had the custom extension 'mal' included in it. Static and dynamic compression are turned on at the server and website level. I can confirm that normal .aspx pages are compressed correctly. The only content I cannot get compressed is the content coming from the custom handler. I have tried the following configs with no success:

    <handlers>
      <add name="MyJsonService" verb="GET,POST" path="*.mal"
           type="Library.Web.HttpHandlers.MyJsonServiceHandlerFactory, Library.Web" />
    </handlers>
    <httpCompression>
      <dynamicTypes>
        <add mimeType="application/json" enabled="true" />
      </dynamicTypes>
    </httpCompression>

    <httpCompression>
      <dynamicTypes>
        <add mimeType="application/*" enabled="true" />
      </dynamicTypes>
    </httpCompression>

    <staticContent>
      <mimeMap fileExtension=".mal" mimeType="application/json" />
    </staticContent>
    <httpCompression>
      <dynamicTypes>
        <add mimeType="application/*" enabled="true" />
      </dynamicTypes>
    </httpCompression>

    Thanks in advance for the help.

    Read the article

  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a JSON result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar responses are compressed. Viewing the request, the Accept-Encoding: gzip header is present. The response has the MIME type "application/json", but is not compressed. I've identified that the issue appears to relate to the MIME type: when I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume that this issue has something to do with the way that ASP.NET MVC generates content type headers. The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching MIME type.

    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                     noCompressionForProxies="false">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/json" enabled="true" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="application/json" enabled="true" />
      </staticTypes>
    </httpCompression>
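
    A hedged guess, not verified against this setup: IIS compares the dynamicTypes entries against the full Content-Type header, and MVC's JsonResult emits "application/json; charset=utf-8", so listing that exact string alongside the bare type may be what the wildcard was papering over:

    <dynamicTypes>
      <add mimeType="application/json" enabled="true" />
      <add mimeType="application/json; charset=utf-8" enabled="true" />
    </dynamicTypes>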

    Read the article

  • Reconstructing trees from a "fingerprint"

    - by awshepard
    I've done my SO and Google research, and haven't found anyone who has tackled this before, or at least, anyone who has written about it. My question is, given a "universal" tree of arbitrary height, with each node able to have an arbitrary number of branches, is there a way to uniquely (and efficiently) "fingerprint" arbitrary sub-trees starting from the "universal" tree's root, such that given the universal tree and a tree's fingerprint, I can reconstruct the original tree? For instance, I have a "universal" tree (forgive my poor illustrations), representing my universe of possibilities:

              Root
       / / / | \ \ ... \
      O O O O O O       O    (Level 1)
     /|\ /|\ ........... \   (Level 2)
     etc.

    I also have tree A, a rooted subtree of my universe:

          Root
        / /|\ \
       O O O O O
      /
      Etc.

    Is there a way to "fingerprint" the tree, so that given that fingerprint and the universal tree, I could reconstruct A? I'm thinking something along the lines of a hash, a compression, or perhaps a functional/declarative construction? Big-O analysis (in time or space) is a plus. As a for-instance, a nested expression like: {{(Root)},{(1),(2),(3)},{(2,3),(1),(4,5)}...} representing the actual nodes present at each level in the tree is probably valid, but can it be done more efficiently?
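
    One possible scheme, sketched here as an assumption rather than the poster's own method: walk the universal tree in a fixed pre-order and record one bit per node saying whether that node is present in the subtree; reconstruction replays the same walk. The fingerprint is O(N) bits in the size of the universal tree, regardless of the subtree's shape.

    using System.Collections.Generic;

    class Node
    {
        public string Label;
        public List<Node> Children = new List<Node>();
    }

    static class TreeFingerprint
    {
        // One bit per universal-tree node, in pre-order: true = node is in the subtree.
        public static List<bool> Fingerprint(Node universalRoot, HashSet<Node> inSubtree)
        {
            var bits = new List<bool>();
            Walk(universalRoot, n => bits.Add(inSubtree.Contains(n)));
            return bits;
        }

        // Replays the same pre-order walk to rebuild the subtree from the bits.
        public static Node Reconstruct(Node universalRoot, List<bool> bits)
        {
            int i = 0;
            return Copy(universalRoot, bits, ref i);
        }

        static Node Copy(Node u, List<bool> bits, ref int i)
        {
            bool present = bits[i++];
            Node copy = present ? new Node { Label = u.Label } : null;
            foreach (var child in u.Children)
            {
                var c = Copy(child, bits, ref i);   // always consumes the child's bits
                if (copy != null && c != null) copy.Children.Add(c);
            }
            return copy;
        }

        static void Walk(Node n, System.Action<Node> visit)
        {
            visit(n);
            foreach (var c in n.Children) Walk(c, visit);
        }
    }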

    Read the article

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code, by checking if the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem. I would like to do this for fun. OK, now that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of: Break the zip code into 2 parts, the first 2 digits and the last 3 digits. Make a giant if-else statement first checking the first 2 digits, then checking ranges within the last 3 digits. Or, convert the zips into hex, and see if I can do the same thing using smaller groups. Find out if, within the range of all valid zip codes, there are more valid zip codes or more invalid zip codes, and write the above code targeting the smaller group. Break up the hash into separate files, and load them via Ajax as the user types in the zip code - so perhaps break into 2 parts, the first for the first 2 digits, the second for the last 3. Lastly, I plan to generate the JavaScript files using another program, not by hand. Edit: performance matters here. I do want to use this, if it doesn't suck. Performance means JavaScript code execution + download time. Edit 2: JavaScript-only solutions please. I don't have access to the application server; plus, that would make this into a whole other problem =)

    Read the article

  • How can I tell if a byte array has already been compressed?

    - by MikeG
    Hi, can I rely on the first few bytes of data compressed using System.IO.Compression.DeflateStream in .NET always being the same? These bytes seem to always be the first bytes: 237, 189, 7, 96, 28, 73, 150, 37, 38, 47, ... I'm assuming this is some kind of header, and I'd like to assume that this header is fixed and isn't going to change. Has anyone got any extra info about this? Background info (the reason I want to know this): I have a load of data in a database table that could do with being made smaller. I've decided I'm going to start compressing the data and am not going to bother compressing the existing data. When the data gets into my .NET code it is a String. I'd like to be able to look at the first few bytes of the string and see if it has been compressed; if it has, then I need to decompress it. I was originally thinking I could convert the string to bytes and just try decompressing the data, and if an exception happens, assume it wasn't compressed. But I think checking the header bytes would give me much better performance. Many thanks, Mike G
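
    A hedged alternative sketch, not a guarantee about DeflateStream's output: a raw deflate stream has no fixed magic number (the first bytes depend on the block type and data), whereas gzip framing does start with two fixed bytes (0x1F 0x8B), so one option is to store new rows with GZipStream and test for that prefix.

    // Returns true when the blob starts with the two-byte gzip magic number.
    static class CompressedFlag
    {
        public static bool LooksGzipped(byte[] data)
        {
            return data != null && data.Length >= 2 && data[0] == 0x1F && data[1] == 0x8B;
        }
    }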

    Read the article

  • Dealing with a large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of text blocks (about 20K, and the largest I have seen is about 200K of them) in a short span of time and store them in a relational database. Each of the text blocks is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008), .NET 3.5 and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these: Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage? Do you know what the best compression algorithm/library for this context would be, giving me the best performance? Currently I just use the standard GZip in the .NET Framework. Do you know any best practices for dealing with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET Framework (it is a big project and this requirement is only a small part of it). EDITED: I will keep adding to this to clarify points raised. I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block using the primary key. I have a working solution implemented as above and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.

    Read the article

  • Minimizing MySQL output with Compress() and by concatenating results?

    - by johnrl
    Hi all. It is crucial that I transfer the least amount of data possible between server and client. Therefore I thought of using the MySQL COMPRESS() function. To get the maximum compression I also want to concatenate all my results into one large string (or several of the maximum length allowed by MySQL), to allow similar results to be compressed together, and then compress that string. 1st problem (concatenating MySQL results): SELECT name,age FROM users returns 10 results. I want to concatenate all these results into one string of the form: name,age,name,age,name,age... and so on. Is this possible? 2nd problem (compressing the results from above): When I have constructed the concatenated string as above I want to compress it. If I do: SELECT COMPRESS('myname'); then it just gives me the character '-' as output - sometimes it even returns unprintable characters. How do I get COMPRESS() to return a compressed printable string that I can transfer in, e.g., ASCII encoding?
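
    A hedged sketch of both steps, assuming GROUP_CONCAT is acceptable and that group_concat_max_len has been raised from its 1024-byte default to fit the concatenated result:

    -- Concatenate the rows, compress the result, and hex-encode it so the
    -- output stays printable; HEX() doubles the size, so sending the raw
    -- COMPRESS() bytes over a binary-safe connection avoids that overhead.
    SELECT HEX(COMPRESS(GROUP_CONCAT(name, ',', age SEPARATOR ','))) FROM users;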

    Read the article

  • ITL (iTunes Library) Format

    - by CHiRo79
    I'm developing a Java solution for managing an iTunes library (ITL file). The ITL format is a proprietary one. I'm looking for an implementation of it or documentation about the format, but Google doesn't turn up anything useful. Does anyone have experience with this? Where can I find more information? Thanks in advance.

    Read the article

  • Creating gif/bmp files with flex

    - by dta
    public function bmdToStr(bmd:BitmapData,width:int,height:int):String { var encoder:JPEGEncoder = new JPEGEncoder(); var encBytes:ByteArray = encoder.encode(bmd); return ImageSnapshot.encodeImageAsBase64(new ImageSnapshot(width,height,encBytes,"image/jpeg")); } As of now, I am creating a JPEG image from BitmapData as above. I can use PNGEncoder for creating PNG images as well. How do I create .bmp or .gif files?

    Read the article

  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1,940 icons that are used throughout. They're currently in ICO, and new images provided to me come in ICO format too. I have noticed that they contain a 16x16 and a 32x32 representation of each icon in one file. Each file is roughly 4KB in size (as reported by Finder, but ls reports that they vary from ~1,000 bytes to 5,000 bytes). A very small number of these icons only contain the 32x32 representation, and as a result are only around 700 bytes in size. Currently I am bundling these icons with my application and they are inflating the size of the app a bit more than I would like. Altogether, the images total just about 25.5MB. Xcode must do some kind of compression because the resulting app bundle is about 12.4MB. Compressing this further into a ZIP (as it would be when submitted to the App Store) results in a final file of 5.8MB. I'm aware that the maximum limit for over-the-air App Store downloads has been raised to 20MB since the introduction of the iPad (I'm not sure if that extends to iPhone apps as well as iPad apps though; if not, the limit would be 10MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week), and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app?

    Things I've tried and not had much success with: Converting the icons from ICO to PNG: I tried this in the hope that the pngcrush utility would help with the file size, but it appears that it doesn't make much of a difference between a normal PNG and a crushed PNG (I believe it just optimises the image for display on the iPhone's GPU rather than compressing its size). Also, going from ICO to PNG actually increased the size of the icon files... Zipping the images and then uncompressing them on first run: while this did reduce the overall image sizes, I found that the effort needed to unzip them, copy them to the documents folder and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones, unzipping and copying around 25MB of images takes too long and creates a bad experience...

    Things I've considered but not yet tried: Instead of distributing the icons within the app bundle, host them online and download each icon on demand (it depends on the user's data which icons will actually be displayed and when). The issue with this is that bandwidth costs money, and image downloads will be bandwidth intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1,500 to be active based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package. So I'm open to thoughts on how to solve this tricky issue.

    Read the article

  • Difficulty determining the file type of a text database file

    - by Joseph Silvashy
    So the USDA has some weird database of general nutrition facts about food, and well, naturally we're going to steal it for use in our app. But anyhow, the format of the lines is like the following:

    ~01001~^~0100~^~Butter, salted~^~BUTTER,WITH SALT~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87
    ~01002~^~0100~^~Butter, whipped, with salt~^~BUTTER,WHIPPED,WITH SALT~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87
    ~01003~^~0100~^~Butter oil, anhydrous~^~BUTTER OIL,ANHYDROUS~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87
    ~01004~^~0100~^~Cheese, blue~^~CHEESE,BLUE~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87

    with those odd ~ and ^ characters separating the values. It also lacks a header row, but that's OK; I can figure that out from the other stuff on their site: http://www.ars.usda.gov/Services/docs.htm?docid=8964 Any help would be great! If it matters, we're making an open/free API with Ruby to query this data. Additionally, I'm having a tough time posing this question, so I've made it a community wiki so we can all pitch in!

    Read the article

  • Internet Explorer 8 + Deflate

    - by Andreas Bonini
    I have a very weird problem... I really do hope someone has an answer because I wouldn't know where else to ask. I am writing a CGI application in C++ which is executed by Apache and outputs HTML code. I am compressing the HTML output myself - from within my C++ application - since my web host doesn't support mod_deflate for some reason. I tested this with Firefox 2, Firefox 3, Opera 9, Opera 10, Google Chrome, Safari, IE6, IE7, IE8, even wget... It works with ANYTHING except IE8. IE8 just says "Internet Explorer cannot display the webpage", with no information whatsoever. I know it's because of the compression only because it works if I disable it. Do you know what I'm doing wrong? I use zlib to compress it, and the exact code is:

    /* Compress it */
    int compressed_output_size = content.length() + (content.length() * 0.2) + 16;
    char *compressed_output = (char *)Alloc(compressed_output_size);
    int compressed_output_length;
    Compress(compressed_output, compressed_output_size, (void *)content.c_str(),
             content.length(), &compressed_output_length);

    /* Send the compressed header */
    cout << "Content-Encoding: deflate\r\n";
    cout << boost::format("Content-Length: %d\r\n") % compressed_output_length;
    cgiHeaderContentType("text/html");
    cout.write(compressed_output, compressed_output_length);

    static void Compress(void *to, size_t to_size, void *from, size_t from_size, int *final_size)
    {
        int ret;
        z_stream stream;
        stream.zalloc = Z_NULL;
        stream.zfree = Z_NULL;
        stream.opaque = Z_NULL;

        if ((ret = deflateInit(&stream, CompressionSpeed)) != Z_OK)
            COMPRESSION_ERROR("deflateInit() failed: %d", ret);

        stream.next_out = (Bytef *)to;
        stream.avail_out = (uInt)to_size;
        stream.next_in = (Bytef *)from;
        stream.avail_in = (uInt)from_size;

        if ((ret = deflate(&stream, Z_NO_FLUSH)) != Z_OK)
            COMPRESSION_ERROR("deflate() failed: %d", ret);

        if (stream.avail_in != 0)
            COMPRESSION_ERROR("stream.avail_in is not 0 (it's %d)", stream.avail_in);

        if ((ret = deflate(&stream, Z_FINISH)) != Z_STREAM_END)
            COMPRESSION_ERROR("deflate() failed: %d", ret);

        if ((ret = deflateEnd(&stream)) != Z_OK)
            COMPRESSION_ERROR("deflateEnd() failed: %d", ret);

        if (final_size)
            *final_size = stream.total_out;

        return;
    }
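
    A hedged guess at the IE8-only failure, offered as an assumption rather than a confirmed diagnosis: "Content-Encoding: deflate" is commonly interpreted (notably by Internet Explorer) as a raw DEFLATE stream, while deflateInit() wraps the data in a zlib header and Adler-32 trailer. If that is the cause, asking zlib for the raw stream via deflateInit2() with negative windowBits - or switching to gzip framing, which every tested browser accepts - would be the change to try:

    /* Raw deflate, no zlib wrapper: windowBits = -15 */
    if ((ret = deflateInit2(&stream, CompressionSpeed, Z_DEFLATED,
                            -15, 8, Z_DEFAULT_STRATEGY)) != Z_OK)
        COMPRESSION_ERROR("deflateInit2() failed: %d", ret);

    /* Or gzip framing instead: windowBits = 15 + 16, sent with
       "Content-Encoding: gzip" rather than "deflate". */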

    Read the article
