Search Results

Search found 2696 results on 108 pages for 'compression formats'.


  • What file formats and conventions should I support to make my game engine artist-friendly?

    - by Avi
    I'm writing a game engine, and I want to know what I should do to make it more artist-friendly. I don't want to be too limiting in terms of what file formats I support, etc. Some specific questions: Are there specific formats artists like to model in? Does it not matter, because the 3D modeler abstracts the data storage away? Is it okay if I don't support per-vertex coloration in my game engine? If I have to store a diffuse, specular, ambient, and emissive color value for each vertex, it doubles the size of the vertices in the buffer. Is it reasonable to ask artists to do all of these things in textures/maps? Any other tips on making sure that artists have to adapt their workflow to my specific engine as little as possible would be appreciated.
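
    A rough illustration of the vertex-size trade-off described above, as a minimal sketch; the layouts and field choices are hypothetical, not taken from any particular engine or modeling format:

        #include <cstdint>

        // "Textures-only" vertex: material colours live in maps rather than per vertex.
        struct VertexBasic {
            float position[3];   // 12 bytes
            float normal[3];     // 12 bytes
            float uv[2];         //  8 bytes
        };                       // 32 bytes per vertex

        // The same vertex with four per-vertex colours packed as 8-bit RGBA.
        struct VertexWithColors {
            float    position[3];
            float    normal[3];
            float    uv[2];
            uint32_t diffuse, specular, ambient, emissive;  // 16 bytes
        };                       // 48 bytes per vertex; full-float colours would push it to 96

    Exactly how much the colours add depends on the base layout and the colour precision; the point is that four extra per-vertex values are a significant fraction of the whole vertex, which is why pushing them into maps is a common compromise.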

    Read the article

  • Why is IIS7 not compressing my static files?

    - by Peter Evjan
    I am trying to get IIS to compress jquery.js (and all other static files, but using jQuery as the example here) on my localhost, but something goes wrong. The funny part is that when I look in my %SystemDrive%\inetpub\temp\IIS Temporary Compressed Files\MySiteName, I see the jquery.js file there, and its size is 24 KB. But in the browser, according to the Net tab in Firebug, the size is 69 KB. I've tried the following:
    - Checking that my browser accepts compression: I found "Accept-Encoding: gzip, deflate" in the request header via Firebug.
    - Enabling Failed Request Tracing: nothing turns up in the %SystemDrive%\inetpub\logs\FailedReqLogFiles folder after I make my request, though.
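
    A hedged sketch of the settings that usually matter for this symptom. The attribute names are from IIS's system.webServer schema; treating frequentHitThreshold as the culprit is an assumption, but by default IIS 7 only serves the compressed copy of a static file once it has been requested frequentHitThreshold times within frequentHitTimePeriod, which matches "compressed file sits in the temp folder but the browser still gets the full size":

        <system.webServer>
          <urlCompression doStaticCompression="true" />
          <!-- Assumption: lower the hit threshold so early requests are already served
               compressed. serverRuntime is usually locked at server level, so this may
               have to go into applicationHost.config rather than the site's web.config. -->
          <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />
        </system.webServer>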

    Read the article

  • For large files, compress first then transfer, or use rsync -z? Which would be fastest?

    I have a ton of relatively small data files, but they take up about 50 GB and I need them transferred to a different machine. I was trying to think of the most efficient way to do this. Thoughts I had were to: gzip the whole thing, rsync it, and decompress it; rely on rsync -z for compression; or gzip it and then use rsync -z as well. I am not sure which would be most efficient, since I am not sure exactly how rsync -z is implemented. Any ideas on which option would be the fastest?
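
    For reference, a sketch of the two extremes being weighed here (host name and paths are placeholders): a streaming tar pipe compresses on the fly and avoids per-file round trips, while rsync -z compresses in transit but keeps rsync's ability to resume and re-run incrementally.

        # One shot: compress while streaming, nothing is written twice on the source
        tar -czf - datadir | ssh otherhost 'tar -xzf - -C /destination'

        # rsync with in-transit compression (resumable; cheap to re-run later)
        rsync -az datadir/ otherhost:/destination/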

    Read the article

  • What do the numbers 240 and 360 mean when downloading video? How can I tell which video is more compressed?

    - by DaMing
    I have downloaded some computer science lectures from YouTube recently. There is usually more than one choice of file size and file format to download. I noticed that for the same video, the downloadable one with the FLV 240 extension is larger than another one with the MPEG4 360 extension. What do the numbers (240 and 360) mean? And which file's compression rate is higher? That is to say, which one discarded more of the original data?

    Read the article

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slowish to generate, and it seems that Apache is holding on until it has the complete page, compressing it, then sending it to the browser. There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
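
    mod_deflate compresses the response in fragments as they arrive rather than waiting for the whole page, so a hedged guess is that output buffering (in PHP or elsewhere in the stack) is what is actually holding the data back. A sketch of the relevant knobs, with the buffer value only as an example; the PHP script would still need to flush() its output early for anything to reach the filter:

        <IfModule mod_deflate.c>
            SetOutputFilter DEFLATE
            # Fragment size zlib compresses at one time (default 8096 bytes);
            # smaller fragments put earlier bytes on the wire at some compression cost.
            DeflateBufferSize 4096
        </IfModule>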

    Read the article

  • Scanned JPEGs are large and slow to load - can they be optimized losslessly?

    - by Alistair Knock
    I have hundreds of JPEG photographs which were scanned about 5 years ago from negatives using a Konica Minolta DiMAGE Scan Dual IV. The dimensions are ~4500x3000, and the file size is around 12 MB, compared to shots from a DSLR with dimensions of 3000x2300 and a file size of 2-4 MB (actually, these are the output from a RAW converter). The file size is obviously quite a big difference, but the issue that's bothering me is that the (perceived) loading time is at least 10 times slower. Is this size/speed discrepancy likely to be because the scanner software saved the JPEGs inefficiently / using an old compression format, or is it simply that the scanned negatives contain much more "detail" (in the form of grain/noise) than the digital images? If the former, is there a way to losslessly optimize them? I've tried re-exporting the scanned files to full-size JPEG from my RAW software, but the file size is pretty much the same. Both files will have been saved at 100 quality.
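
    If the scanner software really did write unoptimized Huffman tables, a lossless repack is cheap to try; a minimal sketch using jpegtran (file names are placeholders). If the size barely moves, the grain/noise explanation is the more likely one.

        # Lossless: only the entropy coding is redone, pixel data is untouched
        jpegtran -copy all -optimize -progressive scan001.jpg > scan001-opt.jpg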

    Read the article

  • To decide where to crop an image, how can I highlight its most compressible areas?

    - by Umber Ferrule
    I'm looking to get the most compression out of each of the most popular image formats, such as JPEG, PNG, GIF, etc. Ideally, this would be a tool, or a series of transforms that could be performed (perhaps using a macro and then discarded) in popular image editors (Paint.NET/PaintShop Pro/Photoshop/GIMP) to highlight areas which will compress poorly. Alternatively, what rules of thumb can be used other than reducing colours (for PNG/GIF), reducing image dimensions, and avoiding high-detail areas? I'm not asking for help deciding what format to use for a particular image type, as I think this is fairly common knowledge, i.e. diagrams and images with few colours = PNG/GIF, photographs = JPEG.
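
    One rough way to visualize which regions will compress poorly, sketched here with ImageMagick purely as an illustration of the idea (file names are placeholders): compress a copy heavily, then look at where it diverges most from the original.

        # Bright regions in diff.png are where lossy compression struggles the most
        convert photo.png -quality 20 /tmp/lossy.jpg
        composite -compose difference photo.png /tmp/lossy.jpg diff.png
        # Stretch the contrast if the difference image looks almost black
        convert diff.png -auto-level diff-visible.png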

    Read the article

  • Should you archive documents before backing up to the cloud?

    - by gabbsmo
    I'm planning to add cloud storage to my personal backup strategy. But now I wonder if it really is worth the trouble of compressing my documents and photos. The Open XML formats already use zip compression and JPEG is a lossy image format, so there really isn't much benefit in compressing: 20 MB of documents become about 17 MB at 7-Zip's ULTRA preset. One benefit I can imagine is that you can shorten upload time by archiving the folders, since it minimizes the number of requests that need to be sent to the server at upload and download. So what are your thoughts and experience on this issue? Should you archive your documents before backing them up to the cloud?
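
    For reference, a sketch of the comparison behind those numbers (paths are placeholders); as noted above, the main win is fewer upload requests rather than the modest size reduction.

        # One archive to upload instead of thousands of small requests
        7z a -t7z -mx=9 documents.7z ~/Documents
        # Quick check of how little already-compressed data actually shrinks
        du -sh ~/Documents documents.7z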

    Read the article

  • Compress snapshot backups with duplicates

    - by Usavich
    I have a set of backups of mostly photos. The directory looks sort of like this:

        backup/Day1/photos/1.jpg
                   .../2.jpg
        backup/Day2/photos/2.jpg
                   .../3.jpg
                   .../4.jpg
        backup/DayN/photos/2.jpg
                   .../3.jpg
                   .../9.jpg

    Files with the same name are identical, and there are many duplicates. Due to the way the backup system works, it's not possible to create incremental backups directly; I always get the entire dump each day. If I want to create a compressed archive for a date range, say Day 5~9, what is the best tool/compression algorithm to do that?
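
    A hedged sketch of one approach, assuming a filesystem image is acceptable as the archive format: mksquashfs detects duplicate files by default and stores identical photos only once, so per-day dumps with heavy repetition collapse well. Paths and the compressor choice are placeholders, and xz support depends on how squashfs-tools was built.

        # Pack Day5..Day9 into a single deduplicated, compressed image
        mksquashfs backup/Day5 backup/Day6 backup/Day7 backup/Day8 backup/Day9 \
            days5-9.sqsh -comp xz

    The image can later be loop-mounted read-only, so individual days stay browsable without unpacking everything.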

    Read the article

  • PPTP server connection closes - Too much data?

    - by Sebastian Hoitz
    I set up a PPTP server for my company. However, every time another computer is connected to this server (e.g. our backup server) and a lot of data gets transferred, the connection to this computer closes. In the syslog on the PPTP server I find this message:

        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Reaping child PPP[2583]
        Apr 22 12:44:34 komola-chase pppd[2583]: MPPE disabled
        Apr 22 12:44:34 komola-chase pppd[2583]: Connection terminated.
        Apr 22 12:44:34 komola-chase pppd[2583]: Exit.
        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Client 192.168.0.11 control connection finished
        Apr 22 12:55:11 komola-chase pptpd[2674]: GRE: xmit failed from decaps_hdlc: No buffer space available
        Apr 22 12:55:11 komola-chase pptpd[2674]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Apr 22 12:55:11 komola-chase pppd[2675]: Modem hangup
        Apr 22 12:55:11 komola-chase pppd[2675]: Connect time 23.0 minutes.

    Hopefully you can help me as to what is wrong. As far as I can tell, there is no compression enabled on the PPTP server (nobsdcomp option). Thank you!

    Read the article

  • Best video codec to store my own collection

    - by Jack
    Hello! I think this question has already been asked, but with different flavours. My problem resides in the fact that my camera (Canon G9) creates video with an almost raw codec (I think it's plain old MPEG), so a 10-minute video is almost 900 MB. I would like to convert them to a format that has a good trade-off between space and quality. I would prefer the quality to be as good as the original (of course this is not possible with lossy compression), just saving as much space as possible with a minimal loss of quality. Which codec should I look at? H.264? It seems to be the champion of the moment. Otherwise, which other ones could I try? XviD? Which parameters should I use? I mean, how many kbit/s is a fairly good bitrate to keep high quality? And what about the audio codec? The video specs are 640x480 at 30 fps or 1024x768 at 15 fps. Thanks in advance!
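
    A hedged starting point with ffmpeg and x264, using CRF (constant quality) rather than guessing a bitrate; the values are generic defaults to tune from, not recommendations tested against this camera's footage.

        # CRF ~18-23: lower means better quality and larger files; 'slow' trades encode time for size
        ffmpeg -i MVI_0001.AVI -c:v libx264 -preset slow -crf 20 \
               -c:a aac -b:a 128k MVI_0001.mp4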

    Read the article

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10", which translates to: Server not configured to compress 1.0 requests. Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0.

    Request (from Chrome, captured via Fiddler):

        GET /css/reset.css HTTP/1.1
        Host: [-----].com
        Connection: keep-alive
        Cache-Control: max-age=0
        If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
        Accept: text/css,*/*;q=0.1
        Referer: http://[-----].com/
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Response from IIS:

        HTTP/1.0 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Content-Type: text/html; charset=utf-8
        Expires: -1
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Tue, 11 Dec 2012 11:57:03 GMT
        Connection: close
        Content-Length: 108837

    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0? Edit to add: digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-loc don't?
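
    For what it's worth, even if the 1.0 downgrade itself can't be fixed (a device in the path rewriting the request, such as a proxy, is only an assumption here), the compression side can be relaxed; a sketch using the httpCompression attributes IIS already exposes for exactly this case:

        <httpCompression noCompressionForHttp10="false"
                         noCompressionForProxies="false">
          <!-- existing staticTypes / dynamicTypes lists stay as they are -->
        </httpCompression>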

    Read the article

  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day, but it's become apparent that this isn't frequent enough. An uncompressed log is about 6 GB, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing. Obviously we could move our logs to a partition with more space, but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often -- hourly, say -- but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it, because that has all the disadvantages of our current solution. Gzip doesn't seem to offer any append or concatenate option, but perhaps I've missed something obvious. This question suggests that straight shell concatenation "works", in that the archive can be decompressed, but the fact that gzip -l doesn't work seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome -- our only constraints are our relatively small log partitions and the need to provide a daily compressed log.
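
    For what it's worth, the gzip file format explicitly allows multiple members to be concatenated: gunzip and zcat read them all, and it is only gzip -l that reports just the last member, which is what makes the result look dodgy. A minimal sketch with placeholder file names:

        # Append each hourly archive to the running daily file
        cat access.2012-06-01-14.gz >> access.2012-06-01.gz

        # All members decompress; gzip -l misreports the totals but the data is intact
        zcat access.2012-06-01.gz | wc -l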

    Read the article

  • Optimized .htaccess???

    - by StackOverflowNewbie
    I'd appreciate some feedback on the compression and caching configuration below. I'm trying to come up with a general-purpose, optimized compression and caching configuration. If possible:
    1. Note your PageSpeed and YSlow grades
    2. Add the configuration to your .htaccess
    3. Clear your cache
    4. Note your PageSpeed and YSlow grades again to see if there are any improvements (or degradations)
    NOTE: Make sure you have the appropriate modules loaded. Any feedback is much appreciated. Thanks.

        # JavaScript MIME type issues:
        # 1. Apache uses "application/javascript": http://svn.apache.org/repos/asf/httpd/httpd/branches/1.3.x/conf/mime.types
        # 2. IIS uses "application/x-javascript": http://technet.microsoft.com/en-us/library/bb742440.aspx
        # 3. SVG specification says it is text/ecmascript: http://www.w3.org/TR/2001/REC-SVG-20010904/script.html#ScriptElement
        # 4. HTML specification says it is text/javascript: http://www.w3.org/TR/1999/REC-html401-19991224/interact/scripts.html#h-18.2.2.2
        # 5. "text/ecmascript" and "text/javascript" are considered obsolete: http://www.rfc-editor.org/rfc/rfc4329.txt

        #-------------------------------------------------------------------------------
        # Compression
        #-------------------------------------------------------------------------------
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE application/xhtml+xml
            AddOutputFilterByType DEFLATE application/xml
            AddOutputFilterByType DEFLATE application/atom+xml
            AddOutputFilterByType DEFLATE text/css
            AddOutputFilterByType DEFLATE text/html
            AddOutputFilterByType DEFLATE text/plain
            AddOutputFilterByType DEFLATE text/xml

            # The following MIME types are in the process of registration
            AddOutputFilterByType DEFLATE application/xslt+xml
            AddOutputFilterByType DEFLATE image/svg+xml

            # The following MIME types are NOT registered
            AddOutputFilterByType DEFLATE application/mathml+xml
            AddOutputFilterByType DEFLATE application/rss+xml

            # Deal with JavaScript MIME type issues
            AddOutputFilterByType DEFLATE application/javascript
            AddOutputFilterByType DEFLATE application/x-javascript
            AddOutputFilterByType DEFLATE text/ecmascript
            AddOutputFilterByType DEFLATE text/javascript
        </IfModule>

        #-------------------------------------------------------------------------------
        # Expires header
        #-------------------------------------------------------------------------------
        <IfModule mod_expires.c>
            # 1. Set Expires to a minimum of 1 month, and preferably up to 1 year, in the future
            #    (but not more than 1 year as that would violate the RFC guidelines)
            # 2. Use "Expires" over "Cache-Control: max-age" because it is more widely accepted
            ExpiresActive on
            ExpiresByType application/pdf "access plus 1 year"
            ExpiresByType application/x-shockwave-flash "access plus 1 year"
            ExpiresByType image/bmp "access plus 1 year"
            ExpiresByType image/gif "access plus 1 year"
            ExpiresByType image/jpeg "access plus 1 year"
            ExpiresByType image/png "access plus 1 year"
            ExpiresByType image/svg+xml "access plus 1 year"
            ExpiresByType image/tiff "access plus 1 year"
            ExpiresByType image/x-icon "access plus 1 year"
            ExpiresByType text/css "access plus 1 year"
            ExpiresByType video/x-flv "access plus 1 year"

            # Deal with JavaScript MIME type issues
            ExpiresByType application/javascript "access plus 1 year"
            ExpiresByType application/x-javascript "access plus 1 year"
            ExpiresByType text/ecmascript "access plus 1 year"
            ExpiresByType text/javascript "access plus 1 year"

            # Probably better to explicitly declare MIME types than to have a blanket rule for expiration
            # Uncomment below if you disagree
            #ExpiresDefault "access plus 1 year"
        </IfModule>

        #-------------------------------------------------------------------------------
        # Caching
        #-------------------------------------------------------------------------------
        <IfModule mod_headers.c>
            <FilesMatch "\.(bmp|css|flv|gif|ico|jpg|jpeg|js|pdf|png|svg|swf|tif|tiff)$">
                Header add Cache-Control "public"
                Header unset ETag
                Header unset Last-Modified
                FileETag none
            </FilesMatch>
        </IfModule>

    Read the article

  • NTFS-compressing Virtual PC disks (on host and/or guest)

    - by nlawalker
    I'm hoping someone here can answer these definitively:
    1. Does putting a VHD file in an NTFS-compressed folder on the host improve performance of the virtual machine, diminish performance, or neither?
    2. What about using NTFS compression within the guest?
    3. Does using compression on either the host or the guest lead to any problems like read or write errors?
    4. If I were to put a VHD in a compressed folder on the host, would I benefit from compacting it?
    I've seen references to using NTFS compression in quite a few VPC "tips and tricks" blog posts, and it seems like half of them say to never do it and the other half say that not only does it save disk space but it can actually improve performance if you have a fast CPU and your primary performance bottleneck is the disk.

    Read the article

  • logrotate compress files after the postrotate script

    - by Thomas
    I have an application generating a really big log file every day (~800 MB a day), so I need to compress it. But since the compression takes time, I want logrotate to compress the file only after reloading the application (i.e. after sending it the HUP signal).

        /var/log/myapp.log {
            rotate 7
            size 500M
            compress
            weekly
            postrotate
                /bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
            endscript
        }

    Is it already the case that the compression takes place after the postrotate script (which would be counter-intuitive)? If not, can anyone tell me if it's possible to do that without an extra command script (an option or some trick)? Thanks, Thomas
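
    One directive worth noting, offered as a hedged sketch of what might fit the goal rather than a confirmed fix: delaycompress postpones compression of the just-rotated file to the next rotation cycle, so the HUP and the application's final writes always happen well before the file is gzipped.

        /var/log/myapp.log {
            rotate 7
            size 500M
            weekly
            compress
            delaycompress        # compress the previous cycle's file, not the one just rotated
            postrotate
                /bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
            endscript
        }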

    Read the article

  • IIS7 dynamic_compression_not_success Reason 12

    - by Peter Oehlert
    So, I'm a bit of an IIS7 n00b, but I've used most of the old IIS systems going back to 3. I'm trying to turn on dynamic compression and it's working, mostly. It doesn't work for my ADO.NET Data Service (Astoria) requests, batched or not. I found the FREB tracing, which was really helpful. What I come up with for unbatched requests is that they return Reason Code 12, NO_MATCHING_CONTENT_TYPE. OK, so I don't have the matching MIME type specified; that's easy. Except this is what I have in my web.config (which I think is correct, but maybe not):

        <httpCompression dynamicCompressionDisableCpuUsage="100"
                         dynamicCompressionEnableCpuUsage="100"
                         noCompressionForHttp10="false"
                         noCompressionForProxies="false"
                         noCompressionForRange="false"
                         sendCacheHeaders="true"
                         staticCompressionDisableCpuUsage="100"
                         staticCompressionEnableCpuUsage="100">
            <dynamicTypes>
                <clear/>
                <add mimeType="*/*" enabled="true" />
            </dynamicTypes>
            <staticTypes>
                <clear/>
                <add mimeType="*/*" enabled="true" />
            </staticTypes>
        </httpCompression>
        <urlCompression doDynamicCompression="true" doStaticCompression="true" dynamicCompressionBeforeCache="false" />

    Now I think that this means it should compress any request that includes the Accept-Encoding: gzip header. I'd love to know what others might think here. My Fiddler trace:

        GET /SecurityDataService.svc/GetCurrentAccount HTTP/1.1
        Accept-Charset: UTF-8
        Accept-Language: en-us
        dataserviceversion: 1.0;Silverlight
        Accept: application/atom+xml,application/xml
        maxdataserviceversion: 1.0;Silverlight
        Referer: http://sdev03/apptestpage.aspx
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; InfoPath.2; .NET CLR 3.0.30729; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)
        Host: sdev03
        Connection: Keep-Alive
        Cookie: .ASPXAUTH=<snip>

        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Content-Type: application/atom+xml;charset=utf-8
        Server: Microsoft-IIS/7.0
        DataServiceVersion: 1.0;
        X-AspNet-Version: 2.0.50727
        X-Powered-By: ASP.NET
        Date: Mon, 22 Mar 2010 22:29:06 GMT
        Content-Length: 2726

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        *** <snip> removed ***

    Read the article

  • Huge email sizes when using mail merge in Word 2010

    - by Nic
    So I've designed an HTML template to send out some emails with. The code is fine, everything looks great there, and it tests just fantastically. I was sending the emails out with my recipients in the BCC field, but I decided to make it a little more personal, so I opened the file in Word and did a mail merge. The HTML file itself is 3.06 KB and contains an img src pointing to an absolute URL, which is about 125 KB (a little large, I know, but it's very important). When I merge the file from Word 2010 to Outlook 2010, the email size jumps to about 250 KB. It's not much, I know, but I'm a gigantic nerd and I'm stuck thinking it should be about 5 KB with MIME overhead. Here's the file list on one of the test emails:

        File          Size
        image001.png  104366
        image002.gif  43
        MESSAGE       1259
        Mime.822      152575
        TEXT.htm      5712

    Since the img src is specified, I'm not sure why these are coming through. If this is an issue inherent to Outlook, I'd be happy to explore other options.

    Read the article

  • Emails intended as HTML are received as plain text

    - by Jeremy
    I'm regularly receiving emails from a well-known public website that render as plain text without line breaks or working hyperlinks. My email client is Thunderbird, and the Thunderbird help site doesn't offer an answer. I'm reluctant to complain to the website if the problem is at my end. The message source headers include this:

        Content-Type: multipart/alternative;
          boundary=--boundary_9338_03b8c925-816e-4b55-95c4-b2593da7e5f6

    The content in the message source that follows the header is preceded by this:

        ----boundary_9338_03b8c925-816e-4b55-95c4-b2593da7e5f6
        Content-Type: text/html; charset=utf-8
        Content-Transfer-Encoding: base64

    The content itself in the message source reads typically like this:

        PCFkb2N0eXBlIGh0bWwgcHVibGljICItLy9XM0MvL0RURCBIVE1MIDQuMCBUcmFuc2l0aW9u
        YWwvL0VOIj4NCg0KDQo8aHRtbD4NCjxoZWFkPg0KPG1ldGEgaHR0cC1lcXVpdj0iQ29udGVu, etc., etc.

    And, as I've said, the message in the viewing pane is unadulterated plain text. Can you tell me: where is it all going wrong? Thanks.
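
    A quick way to check whose end is at fault, sketched with a placeholder file name: paste the base64 body from the message source into a file and decode it. If it comes out as well-formed HTML, the sender's MIME structure is probably fine and the rendering problem is local.

        # part.b64 holds the base64 block copied from the message source
        base64 -d part.b64 | head -n 5
        # The sample above decodes to something starting with:
        #   <!doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN"> ... <html> <head> ...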

    Read the article

  • 20GB+ worth of emails in my /home: what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and poorly searchable, and history has proven to me that backups work, but also that Thunderbird is capable of losing a lot of mail very easily. Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which from my own practice works efficiently up to about 1000 emails; beyond that, some archive directories should be used to move mail around. I have been looking into DBMail, but I wonder if I would make my case worse or better with such a solution. None of the supported databases employs string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support things like gzip transparently, which could help. Could someone share their thoughts? The 20GB mostly consists of mailing lists and normal mail, not things like attachments. To add some clarification: as we speak, my mail is not server-side indexed at all; only my new mail arrives at a remote IMAP server. It is all local storage from former POP3 accounts, locally mirrored Gmail and IMAP accounts. In my perspective it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, and I'm quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot and have never had any issues with it; I just wonder if this is the way to go, or if there are any other, better solutions. What my question is: what is the best-practice solution that allows 20GB+ of mail and is remotely accessible on demand, easy to back up, and suitable for archiving? It doesn't need to be available 24x7. The final approach I took was installing a local IMAP server (Dovecot), configured as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird
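
    Since ZFS comes up: transparent compression there is a per-dataset property, so nothing about the mail store itself has to change. A sketch with a placeholder pool/dataset name; note that it only applies to data written after the property is set:

        # Enable gzip compression on the dataset holding the mail
        zfs set compression=gzip-6 tank/mail
        # Later, check what it actually bought (mostly wins on mailing-list text, not attachments)
        zfs get compressratio tank/mail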

    Read the article

  • Is there a way to initialize ImageKit's IKSaveOptions to default to TIFF with LZW compression?

    - by Rei
    I'm using Mac OS X 10.6 SDK ImageKit's IKSaveOptions to add the file format accessory to an NSSavePanel using:

        - (id)initWithImageProperties:(NSDictionary *)imageProperties imageUTType:(NSString *)imageUTType;

    and

        - (void)addSaveOptionsAccessoryViewToSavePanel:(NSSavePanel *)savePanel;

    I have tried creating an NSDictionary to specify Compression = 5, but I cannot seem to get the IKSaveOptions to show Format: TIFF, Compression: LZW when the NSSavePanel first appears. I've also tried saving the returned imageProperties dictionary and the userSelection dictionary, and then tried feeding that back in for the next time, but the NSSavePanel always defaults to Format: TIFF with Compression: None. Does anyone know how to customize the default format/compression that shows up in the accessory view? I would like to default the save options to TIFF/LZW and furthermore would like to restore the user's last file format choice for next time. I am able to control the file format using the imageUTType (e.g. kUTTypeJPEG, kUTTypePNG, kUTTypeTIFF, etc.) but I am still unable to set the initial compression option for TIFF or JPEG formats. Thanks, -Rei

    Read the article

  • GDC 2012: DXT is NOT ENOUGH! Advanced texture compression for games

    GDC 2012: DXT is NOT ENOUGH! Advanced texture compression for games (Pre-recorded GDC content) Tired of fighting to fit your textures on disk? Too many bad reviews on long download times? Fix it! Don't settle for putting your raw DXT files in a ZIP, instead, compress your DXT textures by an extra 50%-70%! This talk will cover various ways to increase the compression of your game textures to allow for smaller distributables without introducing error, and allowing for fast on-demand decompression at run time. We'll cover how to losslessly squeeze your data with Huffman, block expansion, vector quantization, and we'll even take a look at what MegaTexture is doing too. If you've ever fought to fit textures into memory, this is the talk for you. Speaker: Colt McAnlis From: GoogleDevelopers Time: 33:05

    Read the article

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been gzipped or deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary and not plain old XML. The client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called. Some further notes: (1) I can't change the WCF service or client as they are supplied by a 3rd party. I can, however, attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the SOAP body, but the entire message. Any ideas/solutions?

    * SOLVED *

    It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article which advocates a helper class:

        public class CompressibleHttpRequestCreator : IWebRequestCreate
        {
            public CompressibleHttpRequestCreator() { }

            WebRequest IWebRequestCreate.Create(Uri uri)
            {
                HttpWebRequest httpWebRequest =
                    Activator.CreateInstance(typeof(HttpWebRequest),
                        BindingFlags.CreateInstance | BindingFlags.Public |
                        BindingFlags.NonPublic | BindingFlags.Instance,
                        null, new object[] { uri, null }, null) as HttpWebRequest;

                if (httpWebRequest == null)
                {
                    return null;
                }

                httpWebRequest.AutomaticDecompression =
                    DecompressionMethods.GZip | DecompressionMethods.Deflate;

                return httpWebRequest;
            }
        }

    and also, an addition to the application configuration file:

        <configuration>
            <system.net>
                <webRequestModules>
                    <remove prefix="http:"/>
                    <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" />
                </webRequestModules>
            </system.net>
        </configuration>

    What seems to be happening is that WCF eventually asks some factory or other deep down in System.Net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF. When I tried to apply this technique to Web Services I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Proxy class and add this method:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            return rq;
        }

    Note that it did not matter whether the CompressibleHttpRequestCreator and the webRequestModules block from the application config file were present or not. For Web Services, only overriding GetWebRequest in the Web Service proxy worked.

    Read the article

  • Zlib compression in boost::iostreams not compatible with zlib.NET

    - by Johan
    Hello, I want to send compressed data from my C# application to a C++ application in zlib format. In C++, I use the zlib_compressor/zlib_decompressor available in boost::iostreams. In C#, I am currently using the ZOutputStream available in the zlib.NET library. First of all, when I compress the same data using both libraries, the results look different:

        boost::iostreams::zlib_compressor:
        FF 13 49 48 00 00 01 00 01 00 00 00 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00

        zlib.NET (zlib.ZOutputStream):
        FF 13 49 48 00 00 01 00 01 00 00 00 78 9C 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00 4F 31 63 8D

    (Note the 78 9C pattern that is present in zlib.NET, but not in boost.) Furthermore, when I decompress data in boost that I compressed in zlib.NET, I am not able to read from the stream, which suggests something is wrong. It does work when I try to decompress data compressed in boost. Does anybody know what is going wrong? Thank you, Johan
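
    The 78 9C pair in the zlib.NET output is the standard two-byte zlib stream header (and the four extra trailing bytes look like the Adler-32 checksum that closes a zlib stream), so the two sides appear to disagree about whether to emit the zlib framing at all. A hedged C++ sketch of pinning that down explicitly through zlib_params, on the assumption that the header mismatch is indeed the problem:

        #include <ios>
        #include <string>
        #include <boost/iostreams/filtering_streambuf.hpp>
        #include <boost/iostreams/filter/zlib.hpp>
        #include <boost/iostreams/device/file.hpp>

        namespace io = boost::iostreams;

        // Sets up a zlib-framed sink (78 9C header + Adler-32 trailer), matching zlib.ZOutputStream.
        void open_compressed_sink(io::filtering_streambuf<io::output>& out, const std::string& path)
        {
            io::zlib_params params;
            params.noheader = false;   // keep the zlib header and checksum

            out.push(io::zlib_compressor(params));
            out.push(io::file_sink(path, std::ios::binary));
        }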

    Read the article
