Search Results

Search found 1285 results on 52 pages for 'lossless compression'.

Page 9/52 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • When should I use MySQL compressed protocol?

    - by ento
    I've learned that MySQL can compress communication between servers and clients. Compression is used if both the client and server support zlib compression and the client requests it (from the MySQL Forge Wiki). The most obvious trade-off is a smaller payload (pro) against extra computation time (con). So, is the compressed protocol something I should enable whenever I can afford servers with adequate specs? Are there other factors I should consider? (A sketch of enabling compression from a client follows this item.)

    Read the article
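
    As an illustration of the client-side switch mentioned above, here is a minimal sketch of requesting the compressed protocol from a C# client. It assumes the MySQL Connector/NET provider (MySql.Data); treat the property names, in particular UseCompression, as assumptions to check against your connector's documentation, and the connection details are placeholders.

        using System;
        using MySql.Data.MySqlClient;

        class CompressedConnectionDemo
        {
            static void Main()
            {
                // Placeholder connection details. UseCompression asks the server to
                // negotiate zlib compression for the wire protocol (assumed option
                // name for Connector/NET; other connectors spell it differently).
                var builder = new MySqlConnectionStringBuilder
                {
                    Server = "db.example.com",
                    Database = "app",
                    UserID = "app_user",
                    Password = "secret",
                    UseCompression = true
                };

                using (var conn = new MySqlConnection(builder.ConnectionString))
                using (var cmd = new MySqlCommand("SELECT VERSION()", conn))
                {
                    conn.Open();
                    Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }

    Whether the trade is worth it mostly depends on how much bandwidth costs relative to CPU on both ends; compressible result sets sent over slow links benefit the most.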

  • Tell if IIS is being asked to serve compressed pages?

    - by Graham
    Hi, I'm trying to find out if our IIS server is being asked to serve pages compressed. I'm a noob regarding a lot of this, so I am working my way through the issues. We're using IIS 6.0 and have correctly turned compression on. If I use Fiddler2 to analyse the HTTP requests via localhost, Fiddler reports that the pages are compressed. If we then access the server over the network, either via its external URL or via the internal server name, Fiddler reports those pages as uncompressed. Therefore, it's logical to assume that something is getting in the way - presumably our ISA server. Our ISA administrator states that ISA is configured to allow compressed requests, but what I want to do is look at the requests coming through to IIS to see if IIS is being asked to serve pages compressed. I'm fairly convinced that our requests are going to ISA and ISA is forwarding them, but without the compression headers - therefore IIS is not performing any compression. I've looked at the IIS logs but can't see anything obvious about the HTTP request. Is there any way I can check this sort of information on the web server itself? One thing that is confusing, though it may be normal, is that the client IP making the request is not the original PC (i.e. mine) and not the ISA firewall, but the web server itself... Thanks

    Read the article

  • MAD method compression function

    - by Jacques
    I ran across the question below in an old exam. My answer just feels a bit short and inadequate. Any extra ideas I can look into or reasons I have overlooked would be great. Thanks. Consider the MAD method compression function, mapping an object with hash code i to element [(3i + 7) mod 9027] mod 6000 of the 6000-element bucket array. Explain why this is a poor choice of compression function, and how it could be improved. I basically just say that the function could be improved by changing the value of p (9027) to a prime number, and that choosing another constant for a (currently 3) could also help. (A sketch of an improved compression function follows this item.)

    Read the article
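
    As a concrete version of the improvement the answer hints at, here is a minimal sketch of a MAD-style compression function with a prime p larger than the bucket count and randomly chosen a and b; the particular prime and the use of System.Random are illustrative assumptions, not a prescribed choice.

        using System;

        class MadCompressor
        {
            // MAD (multiply-add-divide): h(i) = ((a*i + b) mod p) mod N,
            // with p a prime larger than N, a in [1, p-1] and b in [0, p-1].
            private const long P = 10007;          // assumed prime, larger than 6000
            private readonly long a;
            private readonly long b;
            private readonly int bucketCount;

            public MadCompressor(int bucketCount, Random rng)
            {
                this.bucketCount = bucketCount;
                a = 1 + rng.Next((int)(P - 1));    // never 0, never a multiple of P
                b = rng.Next((int)P);
            }

            public int Compress(int hashCode)
            {
                long i = hashCode & 0x7fffffff;    // drop the sign bit
                return (int)(((a * i + b) % P) % bucketCount);
            }
        }

    Because p is prime, a cannot share a factor with it, which avoids the main failure of the exam's function: gcd(3, 9027) = 3, so (3i + 7) mod 9027 can only ever land on a third of the available residues.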

  • How to enable gzip compression using PHP Simple HTML DOM Parser

    - by brant
    I have tried a few things to enable gzip compression using the PHP Simple HTML DOM Parser, but nothing has seemed to work so far. Using ini_set I've managed to change the user agent, so I figured it might be possible to enable gzip compression too? include("simpdom/simple_html_dom.php"); ini_set('zlib.output_compression', 'On'); $url = 'http://www.whatsmyip.org/http_compression/'; $html = file_get_html($url); print $html; The website above tests it. Please let me know if I am going about this the wrong way completely.

    Read the article

  • C#/.NET: Separation of multipage tiff with compression "CCITT T.6" very slow

    - by Alex B
    I need to separate multiframe tiff files, and use the following method: public static Image[] GetFrames(Image sourceImage) { Guid objGuid = sourceImage.FrameDimensionsList[0]; FrameDimension objDimension = new FrameDimension(objGuid); int frameCount = sourceImage.GetFrameCount(objDimension); Image[] images = new Image[frameCount]; for (int i = 0; i < frameCount; i++) { MemoryStream ms = new MemoryStream(); sourceImage.SelectActiveFrame(objDimension, i); sourceImage.Save(ms, ImageFormat.Tiff); images[i] = Image.FromStream(ms); } return images; } It works fine, but if the source was encoded using CCITT T.6 compression, separating a 20-frame file takes up to 15 seconds on my 2.5 GHz CPU. If the images are saved afterwards to a single file using standard compression (LZW), the separation time is under 1 second. Is there a way to speed up the process? (A sketch of saving frames with explicit encoder parameters follows this item.)

    Read the article
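
    One variation worth trying, sketched below, is to hand GDI+ an explicit TIFF encoder and compression parameter when saving each frame, instead of relying on the codec's default re-encode; the helper name is mine, and whether this avoids the slow path on CCITT T.6 sources is an assumption to measure rather than a guarantee.

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Linq;

        static class TiffFrameSaver
        {
            // Save one frame of a multi-frame TIFF with an explicitly chosen
            // compression instead of the encoder's default.
            public static MemoryStream SaveFrame(Image source, FrameDimension dim,
                                                 int frameIndex, EncoderValue compression)
            {
                ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
                    .First(c => c.MimeType == "image/tiff");

                using (var parameters = new EncoderParameters(1))
                {
                    parameters.Param[0] =
                        new EncoderParameter(Encoder.Compression, (long)compression);

                    source.SelectActiveFrame(dim, frameIndex);
                    var ms = new MemoryStream();
                    source.Save(ms, tiffCodec, parameters);
                    return ms;
                }
            }
        }

    Calling it with EncoderValue.CompressionLZW reproduces the fast case the question already observed; comparing that against EncoderValue.CompressionCCITT4 should show whether the cost is in decoding the source frames or in re-encoding them.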

  • Best .NET Framework compression class?

    - by Jack
    Hi all. Yes, I know GZipStream and DeflateStream are the common classes in the .NET Framework that handle compression/decompression. I wish to have compress/decompress functions in my program, but I want a .NET Framework C# solution, not a third-party open-source one; I can't use those because of copyright restrictions in my program. GZipStream and DeflateStream are not so good: for example, GZipStream compresses a file to 480 KB while 7-Zip compresses the same file to 57 KB. Does Microsoft have other, better compression methods? Thanks. (A sketch of round-tripping data through GZipStream follows this item.)

    Read the article
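
    For reference, a minimal sketch of round-tripping a byte array through the framework's GZipStream; the helper names are mine, and the ratio will still trail 7-Zip because Deflate's small window and Huffman coding cannot match LZMA, which is a property of the algorithm rather than a tuning problem.

        using System.IO;
        using System.IO.Compression;

        static class GZipHelper
        {
            public static byte[] Compress(byte[] data)
            {
                using (var output = new MemoryStream())
                {
                    using (var gz = new GZipStream(output, CompressionMode.Compress))
                    {
                        gz.Write(data, 0, data.Length);
                    }   // disposing the GZipStream flushes the final block
                    return output.ToArray();
                }
            }

            public static byte[] Decompress(byte[] compressed)
            {
                using (var input = new MemoryStream(compressed))
                using (var gz = new GZipStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    gz.CopyTo(output);   // Stream.CopyTo needs .NET 4.0 or later
                    return output.ToArray();
                }
            }
        }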

  • Help with dynamic range compression function (audio)

    - by MusiGenesis
    I am writing a C# function for doing dynamic range compression (an audio effect that basically squashes transient peaks and amplifies everything else to produce an overall louder sound). I have written a function that does this (I think): public static void Compress(ref short[] input, double thresholdDb, double ratio) { double maxDb = thresholdDb - (thresholdDb / ratio); double maxGain = Math.Pow(10, -maxDb / 20.0); for (int i = 0; i < input.Length; i += 2) { // convert sample values to ABS gain and store original signs int signL = input[i] < 0 ? -1 : 1; double valL = (double)input[i] / 32768.0; if (valL < 0.0) { valL = -valL; } int signR = input[i + 1] < 0 ? -1 : 1; double valR = (double)input[i + 1] / 32768.0; if (valR < 0.0) { valR = -valR; } // calculate mono value and compress double val = (valL + valR) * 0.5; double posDb = -Math.Log10(val) * 20.0; if (posDb < thresholdDb) { posDb = thresholdDb - ((thresholdDb - posDb) / ratio); } // measure L and R sample values relative to mono value double multL = valL / val; double multR = valR / val; // convert compressed db value to gain and amplify val = Math.Pow(10, -posDb / 20.0); val = val / maxGain; // re-calculate L and R gain values relative to compressed/amplified // mono value valL = val * multL; valR = val * multR; double lim = 1.5; // determined by experimentation, with the goal // being that the lines below should never (or rarely) be hit if (valL > lim) { valL = lim; } if (valR > lim) { valR = lim; } double maxval = 32000.0 / lim; // convert gain values back to sample values input[i] = (short)(valL * maxval); input[i] *= (short)signL; input[i + 1] = (short)(valR * maxval); input[i + 1] *= (short)signR; } } and I am calling it with threshold values between 10.0 db and 30.0 db and ratios between 1.5 and 4.0. This function definitely produces a louder overall sound, but with an unacceptable level of distortion, even at low threshold values and low ratios. Can anybody see anything wrong with this function? Am I handling the stereo aspect correctly (the function assumes stereo input)? As I (dimly) understand things, I don't want to compress the two channels separately, so my code is attempting to compress a "virtual" mono sample value and then apply the same degree of compression to the L and R sample values separately. Not sure I'm doing it right, however. I think part of the problem may be the "hard knee" of my function, which kicks in the compression abruptly when the threshold is crossed. I think I may need to use a "soft knee" curve instead. Can anybody suggest a modification to my function to produce the soft knee curve? (A sketch of a soft-knee gain computation follows this item.)

    Read the article
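
    For reference, here is a minimal sketch of a commonly used soft-knee gain computer. It works in conventional dBFS terms (0 dB is full scale, quieter signals are more negative), which is the opposite sign convention from the question's posDb variable, and the knee-width parameter and names are my additions.

        using System;

        static class SoftKnee
        {
            // Static compression curve: maps an input level (dBFS) to an output level.
            // thresholdDb e.g. -20.0, ratio e.g. 4.0, kneeDb is the knee width, e.g. 6.0.
            public static double CompressedLevelDb(double inputDb, double thresholdDb,
                                                   double ratio, double kneeDb)
            {
                double overshoot = inputDb - thresholdDb;

                if (2.0 * overshoot < -kneeDb)
                    return inputDb;                          // well below threshold: unchanged

                if (2.0 * overshoot > kneeDb)
                    return thresholdDb + overshoot / ratio;  // well above threshold: full ratio

                // Inside the knee: quadratic blend between the two straight segments.
                double t = overshoot + kneeDb / 2.0;
                return inputDb + (1.0 / ratio - 1.0) * t * t / (2.0 * kneeDb);
            }
        }

    The gain to apply to a sample is then Math.Pow(10.0, (CompressedLevelDb(levelDb, T, R, W) - levelDb) / 20.0). Just as important for the distortion described above: the gain should be smoothed with attack and release times rather than recomputed independently for every sample, because per-sample gain jumps that track the waveform are themselves a source of harmonic distortion.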

  • Thunderbird compact is taking forever

    - by mulllhausen
    One day I came in to work and found that our development server - an Ubuntu box - had a full hard disk. I did a bit of investigation using the du command and it seems like Mozilla Thunderbird is the major culprit. After burning off some backups, the disk was left at 94%: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 895G 791G 59G 94% / none 4.0G 300K 4.0G 1% /dev none 4.0G 1.4M 4.0G 1% /dev/shm none 4.0G 140K 4.0G 1% /var/run none 4.0G 0 4.0G 0% /var/lock none 4.0G 0 4.0G 0% /lib/init/rw $ cd $ du -ch | grep [0-9]G 666G ./.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au 666G ./.thunderbird/ccsmcruu.default/ImapMail 667G ./.thunderbird/ccsmcruu.default 667G ./.thunderbird 2.2G ./.VirtualBox/Machines/iBike/Snapshots 2.2G ./.VirtualBox/Machines/iBike 2.2G ./.VirtualBox/Machines 2.2G ./.VirtualBox 670G . 670G total I did some reading and found that Mozilla Thunderbird does not compact files by default - i.e. all of the old emails that were sent to trash are still kept. One of the mailboxes used to get a lot of spam, so I guess this accounts for the 667 GB. I opened up Thunderbird to see how much space the inbox actually takes up and it turns out to be approximately 500 MB - over 1000 times less than the stuff that has not been compacted over the years. So I right-clicked on the inbox directory in the tree on the left of Thunderbird and selected 'Compact'. I left it for about 12 hours, but even after that it still said 'compacting folder' on the status bar. I don't use Thunderbird on this PC - it belonged to a colleague who has left the company - but I do occasionally need to look through the inbox for references to the project I am working on, so deleting all traces of Thunderbird is not an option. My question is: is there any way I can monitor the progress of Thunderbird's compacting function? I would really like to know how long it is going to take. Also, is there any way I can speed up the compacting process?

    Read the article

  • What does the number after 7-zip's -m switch mean?

    - by AndreKR
    7-zip has a command line switch to set the compression method, -m followed by a number, e.g. -m0=LZMA. What does the number (0 in the example) mean? Different numbers produce slightly different compression results and performance: time 7z -m0=LZMA -mx=9 -ms=on -mmt=off real 0m2.292s user 0m2.190s sys 0m0.080s time 7z -m1=LZMA -mx=9 -ms=on -mmt=off real 0m2.405s user 0m3.240s sys 0m0.070s time 7z -m0=LZMA -mx=9 -ms=on -mmt=on real 0m1.038s user 0m1.920s sys 0m0.150s time 7z -m1=LZMA -mx=9 -ms=on -mmt=on real 0m1.187s user 0m2.800s sys 0m0.130s

    Read the article

  • mod_deflate enabled for amf?

    - by user10753
    ColdFusion 8, Apache 2.2, running locally on XP Pro SP3. I'm trying to get mod_deflate working for AMF. I've seen a couple of posts that mention this is possible, but I cannot seem to get it to work myself, e.g. http://wadearnold.com/blog/flash/gzip-compression-is-not-part-of-amf. The compression is working for other MIME types I've added to AddOutputFilterByType, so deflate is working correctly. I've tried the following MIME types: application/x-amf, application-x/amf, application/amf, though application/x-amf should be the one. Basically I just added the MIME type to AddOutputFilterByType; is that all? Am I missing a setting?

    Read the article

  • Does SWF provide a better compression rate than zlib for PNG images?

    - by Huang F. Lei
    Somebody told me that when a PNG image is stored in a SWF, it's separated into several layers, hence the alpha channel can be compressed better. Is it true? Or, once a PNG image is imported into a SWF, is its format changed, e.g. converted into bitmap data and then compressed by SWF's compression algorithm? That is, it is not in PNG format anymore. I don't know how SWF packs its resources; please tell me if you know. (A small experiment on channel separation follows this item.)

    Read the article
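
    The "separate channels compress better" claim is easy to test in isolation. Below is a minimal sketch that deflate-compresses the same RGBA buffer twice, once interleaved and once split into per-channel planes; it only illustrates the general effect of channel separation on a zlib-style compressor and says nothing about what the SWF container actually does internally.

        using System;
        using System.IO;
        using System.IO.Compression;

        class ChannelSplitDemo
        {
            static byte[] Deflate(byte[] data)
            {
                using (var ms = new MemoryStream())
                {
                    using (var ds = new DeflateStream(ms, CompressionMode.Compress))
                    {
                        ds.Write(data, 0, data.Length);
                    }
                    return ms.ToArray();
                }
            }

            static void Main()
            {
                const int pixels = 256 * 256;
                var rng = new Random(1);
                var interleaved = new byte[pixels * 4];      // R,G,B,A,R,G,B,A,...

                for (int i = 0; i < pixels; i++)
                {
                    interleaved[i * 4 + 0] = (byte)rng.Next(200, 256);  // noisy red
                    interleaved[i * 4 + 1] = (byte)rng.Next(100, 140);  // noisy green
                    interleaved[i * 4 + 2] = (byte)rng.Next(0, 40);     // noisy blue
                    interleaved[i * 4 + 3] = 255;                       // constant alpha
                }

                // Planar layout: all R bytes, then all G, then all B, then all A.
                var planar = new byte[pixels * 4];
                for (int c = 0; c < 4; c++)
                    for (int i = 0; i < pixels; i++)
                        planar[c * pixels + i] = interleaved[i * 4 + c];

                Console.WriteLine("interleaved: {0} bytes", Deflate(interleaved).Length);
                Console.WriteLine("planar:      {0} bytes", Deflate(planar).Length);
            }
        }

    With a constant or near-constant alpha channel, the planar layout usually compresses noticeably better, which is the effect the question is asking about.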

  • Performance effects of compressing Program Files on Windows / NTFS

    - by SRobertJames
    What are the performance effects of compressing Program Files on Windows NTFS? On a fast, multicore machine, the overhead of decompression is minimal. Machines are generally disk bound, and if you can reduce the disk load by compression, you often speed things up. (Microsoft says that the built-in compression of Windows Search indexes actually improves speed for this reason.) On the other hand, Windows' virtual memory is complicated. Perhaps if files are compressed, they can't be paged out simply. And there may be other issues. In short: on a fast, multicore machine with a relatively slow disk, what performance effects will compressing Program Files have? (A small audit sketch follows this item.)

    Read the article
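
    As a small aid for experimenting, here is a sketch that walks a directory tree and reports how many files actually carry the NTFS "compressed" attribute, which is useful for confirming what state you are benchmarking; the path is only an example, and reading all of Program Files may require elevation.

        using System;
        using System.IO;
        using System.Linq;

        class CompressionAudit
        {
            static void Main()
            {
                string root = @"C:\Program Files\ExampleApp";   // example path

                var files = Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories)
                                     .ToList();

                int compressed = files.Count(f =>
                    (File.GetAttributes(f) & FileAttributes.Compressed) != 0);

                long logicalBytes = files.Sum(f => new FileInfo(f).Length);

                Console.WriteLine("{0} of {1} files are NTFS-compressed ({2:N0} bytes logical size)",
                                  compressed, files.Count, logicalBytes);
            }
        }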

  • Looking for a C# implementation of (Pk) Zip32

    - by bukko
    I need to implement Zip32 (PK compatible) in C#. I can't just call a separate dll or exe because (1) I don't want to write the uncompressed file to disk and (2) I want to avoid the possibility that someone could wrap that library - either of these would compromise security. My ideal solution would be to find a C# implementation of the Zip32 algorithm which I could use, and just modify it so I can pass a byte array or something. Does anyone have any suggestions or (I dare but hope) examples of C# PKZip implementations? (A sketch of building a zip in memory with framework classes follows this item.)

    Read the article
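
    If moving to .NET Framework 4.5 or later is an option, the framework itself gained System.IO.Compression.ZipArchive, which writes standard PKZip-compatible archives entirely in memory; a minimal sketch follows, assuming a reference to the System.IO.Compression assembly, with the helper name being mine.

        using System.IO;
        using System.IO.Compression;   // reference System.IO.Compression.dll (.NET 4.5+)

        static class InMemoryZip
        {
            // Build a zip archive in memory from a single named payload,
            // without ever touching the disk.
            public static byte[] Create(string entryName, byte[] payload)
            {
                using (var buffer = new MemoryStream())
                {
                    using (var archive = new ZipArchive(buffer, ZipArchiveMode.Create, leaveOpen: true))
                    {
                        ZipArchiveEntry entry = archive.CreateEntry(entryName, CompressionLevel.Optimal);
                        using (Stream entryStream = entry.Open())
                        {
                            entryStream.Write(payload, 0, payload.Length);
                        }
                    }   // disposing the archive writes the central directory
                    return buffer.ToArray();
                }
            }
        }

    On earlier framework versions this type does not exist, which is presumably why purely managed third-party implementations were the usual answer at the time.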

  • Using tar, the entire folder structure is included, and I don't want that

    - by Blankman
    I am tarring a folder, and for some reason the entire directory structure that precedes the folder I am tarring is included. I am doing this in a script like: 'tar czf ' + dir + '/asdf.tgz ' + dir + 'asdf/' Where dir is like: /Downloads/archive/ In the man pages, I see I can fix this, but I can't get it to work. I tried: tar czf -C dir ... But now I have some kind of a file -C in my folder (which I can't seem to delete, btw!). Please help!

    Read the article

  • Did You Know? I gave two presentations last week

    - by Kalen Delaney
    Even though I didn't make it to TechEd this year, it didn't mean I was quiet last week. On Wednesday, I was in Colorado, giving a talk for the new Colorado PASS User Group, which is a joint venture between 3 different existing groups from Colorado Springs, Denver and Boulder. On Saturday, I spoke at SQL Saturday #43, in Redmond on the Microsoft campus. My presence there has already been mentioned on two other blogs here at SQLBlog: Merrill Aldrich and the infamous Buck Woody . As Merrill mentioned,...(read more)

    Read the article

  • thunar-archive-plugin not working

    - by Sergio
    After I experienced serious, still-unresolved performance issues with Nautilus, I decided to move to Xubuntu, so I installed its metapackage from Ubuntu and started using it. It turns out that the archive plugin for Thunar (which provides the "Extract here" option in the context menu when right-clicking a compressed archive) is not working, even after I apt-get purged it and reinstalled it. It simply doesn't show its options in the context menu. What should I do to make it work?

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I don't have the hard drive space to back up any more (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (or 217.4 GiB) hard drive that I've been using for backup. What type of compression algorithm (if any) is capable of this kind of compression? I don't care about the time; I have a quad core, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and failed because of lack of space. Any ideas?

    Read the article

  • Sanity checks vs file sizes

    - by Richard Fabian
    In your game assets, do you make room for explicit sanity checks, or do you have some generally expected bounds which you assert? I've been thinking about how we compress data and thought that it's much better to have the former, and less of the latter. If your data can exceed your normal valid ranges, but when it does it's an error, then surely that implies you're not compressing the data well enough? What do you do to find out if your data is compressed as far as it can be, and what do you use to ensure your data isn't corrupted and ensure it's an official release? EDIT: I'm not interested in sanity-checking the file size, but rather in how you manage your sanity checks: do you account for the extra size by storing explicit check data, or by allowing each data member enough file space to go out of its valid range, so that it can be checked merely by looking at the asset in memory after loading?

    Read the article

  • Compress a directory in linux

    - by user9589
    I'm trying to compress a directory and FTP it to a Windows FTP server. I have tried every tar command I can find to compress a directory. It appears to be OK, but then I try to view its contents using WinRAR, and WinRAR keeps telling me the file is corrupt. I have viewed other .gz or .bz2 files using WinRAR, but for some odd reason I can't get this one to work. Does anyone have a suggestion as to something else to try? I would prefer just to have it zip the files so they have a .zip extension, but even then, when I try to browse the archive's contents, both Windows and WinRAR claim it's corrupt.

    Read the article

  • How can I compress video files as much as possible?

    - by EmmyS
    I received 4 .mov files from a client that they want on their mobile website via SlideShowPro. Each original file was between 200 and 400 MB. I've gotten each one down to about 30 MB using Transmageddon as described here, but that's still really big for a mobile connection. Is there any way to shrink them even further? Maybe it's the settings; I used Output Format = MPEG4, Audio = AAC, Video = H264 (which is what is suggested by SlideShowPro).

    Read the article

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits? Bonus constraint 1: For each type of object, the values of some keys change more often than others. Bonus constraint 2: Memory usage on the server side is relatively expensive. (A delta-encoding sketch follows this item.)

    Read the article
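
    One common direction for this, sketched below under the assumption that keys can be mapped to small integer IDs and values fit in unsigned integers, is to send per-frame deltas (only the keys whose values changed) encoded as variable-length integers instead of resending the whole dictionary; the class and method names are illustrative only.

        using System;
        using System.Collections.Generic;
        using System.IO;

        class DeltaEncoder
        {
            // Last state sent per object, keyed by object id.
            private readonly Dictionary<int, Dictionary<int, int>> lastSent =
                new Dictionary<int, Dictionary<int, int>>();

            // Serialize only the key/value pairs that changed since the last frame.
            public byte[] EncodeFrame(int objectId, Dictionary<int, int> current)
            {
                Dictionary<int, int> previous;
                if (!lastSent.TryGetValue(objectId, out previous))
                    previous = new Dictionary<int, int>();

                var changed = new List<KeyValuePair<int, int>>();
                foreach (var kv in current)
                {
                    int old;
                    if (!previous.TryGetValue(kv.Key, out old) || old != kv.Value)
                        changed.Add(kv);
                }

                using (var ms = new MemoryStream())
                using (var w = new BinaryWriter(ms))
                {
                    WriteVarUInt(w, (uint)objectId);
                    WriteVarUInt(w, (uint)changed.Count);
                    foreach (var kv in changed)
                    {
                        WriteVarUInt(w, (uint)kv.Key);
                        WriteVarUInt(w, (uint)kv.Value);   // assumes small non-negative values
                    }
                    lastSent[objectId] = new Dictionary<int, int>(current);
                    w.Flush();
                    return ms.ToArray();
                }
            }

            // 7-bits-per-byte varint, so frequent small values cost one byte.
            private static void WriteVarUInt(BinaryWriter w, uint value)
            {
                while (value >= 0x80)
                {
                    w.Write((byte)(value | 0x80));
                    value >>= 7;
                }
                w.Write((byte)value);
            }
        }

    Keys that change every frame (bonus constraint 1) can be given the smallest IDs so their varints stay at one byte, and keeping only one previous snapshot per object keeps server memory bounded (bonus constraint 2); over an unreliable transport you would also need acknowledgements or periodic full snapshots so a dropped delta cannot desynchronise a client.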

  • Is it possible to modify a video codec + distribute it?

    - by Nick
    This is my first question on this particular Stack Exchange node; I'm not sure if it's the most appropriate place for this question (if not, guidance to the appropriate node would be appreciated). The abstract: I'm interested in modifying existing video codecs and distributing my modded codecs in such a way as to make them easily added to a user's codec library, for example to be added to their MPEG Streamclip, ffmpeg, etc. Some details: I've had some experience modifying codecs by hacking ffmpeg source files and compiling my hacked code (so that, for example, my version of ffmpeg has a very different H.263 than yours). I'm interested now in taking these modified codecs and somehow making them easily distributable, so others could "add them" to their "libraries." Also, I realize there are some tricky rights/patent issues here; this is in part my motivation. I'm interested in the patent quagmires, and welcome any thoughts on this as well. Context link: if it helps (to gauge where I'm coming from), here's a link to a previous codec-hacking project of mine: http://nickbriz.com/glitchcodectutorial/

    Read the article

  • Is it safe to compress my Windows 7 %USERPROFILE%\AppData folder?

    - by Kev
    I've just read Scott Hanselman's latest blog entry, Guide to Freeing up Disk Space under Windows 7, in which he suggests turning on NTFS compression, which I already do for a number of less-travelled folders that contain static files such as downloads or images. However, I am wondering if it's wise to turn on NTFS compression for the whole of my %USERPROFILE%\AppData folder. My system drive is a 128 GB SSD residing in a Dell Precision T5400 3 GHz quad-core Xeon workstation, so I ought not to notice the extra cycles used to compress and decompress files on their way to and from the disk. However, would there be any good reasons not to do this? In fact, could I safely compress the whole of my %USERPROFILE% folder?

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >