Search Results

Search found 20135 results on 806 pages for 'image compression'.

Page 95/806

  • bsdtar: jcf and --use-compress-program=pbzip2 produce different files

    - by Valerio Schiavoni
    These two commands produce files that differ slightly in size. The command tar --use-compress-program=pbzip2 -cf old_logs.tbz2 1tree_* 4tree_* 8tree_* produces old_logs.tbz2, which is 100557548 bytes. The command tar jcf old_logs.tbz 1tree_* 4tree_* 8tree_* produces old_logs.tbz, which is 98783046 bytes. Where does the difference between the two files come from? I'm using bsdtar 2.8.3 - libarchive 2.8.3 on Mac OS X 10.8.5.
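
    pbzip2 compresses its input in independent blocks so that it can run in parallel, and the resulting stream is typically slightly larger than single-threaded bzip2 output; the archived data should still be the same. A quick, hedged way to confirm that both archives contain the same entries (the listing file names are placeholders):

        tar -tjf old_logs.tbz2 | sort > listing-pbzip2.txt
        tar -tjf old_logs.tbz  | sort > listing-bzip2.txt
        diff listing-pbzip2.txt listing-bzip2.txt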

    Read the article

  • Decompressing Files on an NTFS Volume from Linux

    - by amphetamachine
    I recently did something stupid on my dual-boot laptop: I compressed the entire volume to make room for a Linux partition. For some reason, Windows let me compress C:\ntldr, and now I need to get it uncompressed in order for Windows to boot. Here are some of the operating restrictions I have: I do not have access to the BIOS. I cannot boot from CD/USB/floppy (I installed Linux through PXE). It does not have network access. Is there some way to specify that the ntfs-3g driver shouldn't compress files even if it thinks it should (if the directory is compressed) when mounting the volume? Or is there a way to modify the attributes of a directory using ntfstools?

    Read the article

  • Is there a way to determine the original size or file count of a 7-zip archive?

    - by Zac B
    I know that when I compress an archive with the 7za utility, it gives me stats like the number of files processed and the number of bytes processed (the original size of the data). Is it possible, using the command line (on Linux) or some programming language, to determine: the original size of an archive, before it was compressed? The number of files/directories contained within an archive? The answer might be "no, just decompress the whole archive and do the counting/sizing then", but it would be useful to know if there is a faster/less space-greedy way.
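
    Both numbers are available without extracting anything: 7-Zip's list command prints per-entry and total uncompressed sizes, and its summary line gives the file and folder counts. A hedged sketch (the archive name is a placeholder):

        7za l archive.7z
        # the summary line at the bottom of the listing reports the total
        # uncompressed size, the compressed size, and the file/folder counts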

    Read the article

  • Best way to integrate applications into a Windows 7 install.wim image

    - by cyph3r
    I currently have unmodified .iso images of the Windows 7 32-bit and 64-bit installation disks, and I need to integrate some applications (Office, Adobe Reader, etc.) and Windows updates into them, so that when Windows is installed the above applications/updates are already installed and working. Requirements: My output has to be an install.wim image containing the new/improved Windows installation files, because deployment is done via a PXE server and a custom Windows PE environment. The procedure to create the install.wim has to be as automatic as possible; I can't create it manually every time I want to incorporate a new Windows or application update into the image. The image will be installed on 100+ computers, so it needs to be 'generic'. I've never done something like this before, but from what I've researched a possible solution would be: Create a reference installation (preferably on a VM so I can take snapshots) complete with its applications/updates/settings. After the complete setup, take a snapshot of the installation. Run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine. Boot into a Windows PE environment and capture the .wim image using GImageX. Deploy the .wim and enjoy the rapid installation times. :D Does that sound OK? Would you recommend anything else? Right now the applications are installed after the installation of Windows is complete, so the total installation time is quite long. That's why I need a different approach.
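
    A hedged sketch of the capture step from Windows PE, using imagex.exe (the WAIK command-line tool that GImageX wraps), which keeps the capture scriptable; the drive letters and image name are assumptions:

        rem capture the sysprepped reference installation into a WIM
        imagex /capture C: D:\install.wim "Windows 7 Custom" /compress maximum /verify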

    Read the article

  • Saving a compressed text attachment results in empty file

    - by Brandon
    I have a text document with compressed text in it; the text is auto-generated by a program. The text document is fine on my machine (Vista 32-bit) and can be used normally. The other person can also create and use these files just fine (XP 32-bit). However, when I email it to someone else (Outlook 2003 on both machines), the attachment is sent fine (5 KB), but when the other person tries to save it somewhere, the saved file is empty (64 bytes). At first I thought Outlook didn't like compressed text files (security risk maybe?), but I can receive the text files just fine. Is there a setting somewhere on the other person's machine that tells Outlook not to trust compressed text? Can anyone think of a reason why these files are being saved as empty text documents?

    Read the article

  • Content not being compressed even though I'm using zlib in php.ini

    - by Tola Odejayi
    I've edited my php.ini file so that it has these two entries: zlib.output_compression = On and zlib.output_compression_level = 4. However, after restarting Apache, when I request PHP pages, the headers returned in the response indicate that my server is still NOT serving compressed pages (here are selected headers as viewed using Chrome's Network feature):

        Cache-Control:no-cache, must-revalidate, max-age=0
        Connection:Keep-Alive
        Content-Type:text/html; charset=UTF-8
        Date:Mon, 17 Sep 2012 23:46:13 GMT
        Expires:Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified:Mon, 17 Sep 2012 23:46:13 GMT
        Pragma:no-cache
        Proxy-Connection:Keep-Alive
        Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding:chunked
        Via:1.1 XXX-PRXY-07
        X-Powered-By:PHP/5.2.17

    What might I be doing wrong? Is there any other setting that I need to change? EDIT: Here is another set of headers returned to another computer:

        Cache-Control:no-cache, must-revalidate, max-age=0
        Connection:close
        Content-Type:text/html; charset=UTF-8
        Date:Thu, 20 Sep 2012 09:45:26 GMT
        Expires:Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified:Thu, 20 Sep 2012 09:45:26 GMT
        Pragma:no-cache
        Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding:chunked
        Vary:Cookie
        X-Powered-By:PHP/5.2.17
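
    One hedged way to diagnose this from the client side: zlib.output_compression only engages when the request advertises gzip support, and the Via: header above suggests a proxy that may strip or decode the compression. A quick check with curl (the URL is a placeholder):

        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip, deflate' \
            http://example.com/page.php | grep -i 'content-encoding'
        # no "Content-Encoding: gzip" in the output means the response really
        # did arrive uncompressed, whether the server or a proxy is to blame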

    Read the article

  • How can I compress a movie to a specific file size in Windows 7's Live Movie Maker?

    - by Nathan Fellman
    In previous versions of Windows Movie Maker, I could take a raw video file and specify the file size to compress it to, and Movie Maker would compress it accordingly (with the appropriate loss in quality). Live Movie Maker, which comes with Windows 7, doesn't seem to have this option; I can only specify the requested quality. Is there any way to specify the size of the target file in Windows Live Movie Maker?

    Read the article

  • unable to decompress a *.tar.xz file

    - by neubert
    Per http://askubuntu.com/a/107976 I tried tar xf php-5.6.0RC4.tar.xz and tar -xJf php-5.6.0RC4.tar.xz, and in both cases I get the following:

        tar (child): xz: Cannot exec: No such file or directory
        tar (child): Error is not recoverable: exiting now
        tar: Child returned status 2
        tar: Error is not recoverable: exiting now

    Here's php-5.6.0RC4.tar.xz: http://downloads.php.net/tyrael/php-5.6.0RC4.tar.xz I'm running Ubuntu 14.04 LTS.
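
    The error means tar is shelling out to the xz binary and cannot find it; on Ubuntu 14.04 it ships in the xz-utils package:

        sudo apt-get install xz-utils
        tar -xJf php-5.6.0RC4.tar.xz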

    Read the article

  • Does gunzip work in memory or does it write to disk?

    - by Ryan Detzel
    We have our log files gzipped to save space. Normally we keep them compressed and just do gunzip -c file.gz | grep 'test' to find important information, but we're wondering if it's quicker to keep the files uncompressed and then do the grep: cat file | grep 'test'. There has been some discussion about how gzip works: if it reads the file into memory and unzips it there, then the first approach would be faster, but if it doesn't, then the second would be. Does anyone know how gzip uncompresses data?
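
    gunzip -c decompresses the stream in memory and writes the result straight to stdout; it never writes a temporary file to disk. A quick, hedged way to compare the two approaches on your own data (file names are placeholders), plus the zgrep shorthand for the first pipeline:

        time gunzip -c file.gz | grep 'test' > /dev/null
        time grep 'test' file > /dev/null
        zgrep 'test' file.gz   # equivalent to the gunzip -c | grep pipeline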

    Read the article

  • Apache2 and logrotate: delaycompress needed?

    - by j0nes
    Hello, I am currently looking at the file size of my Apache logs, as they have become huge. In my logrotate conf I have delaycompress enabled. Does Apache really need this (the logrotate documentation says that some programs still write to the old file), or is it safe to disable delaycompress? Best regards, Jonas
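
    delaycompress postpones compression of the just-rotated log by one cycle, so a process that still has the old file open (as Apache does until it is reloaded) never writes into a file that is being compressed. With a postrotate block that reloads Apache it is often considered safe to drop, but it costs little to keep. A hedged sketch of a typical stanza (paths and rotation counts are assumptions):

        /var/log/apache2/*.log {
                weekly
                rotate 12
                compress
                delaycompress
                sharedscripts
                postrotate
                        /etc/init.d/apache2 reload > /dev/null
                endscript
        }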

    Read the article

  • Too big a difference between loud and quiet when watching DVDs... constantly have to adjust volume.

    - by Dan
    It is hard for me to find the right volume for watching DVDs on my computer, because most reasonable volumes become overwhelming at the loudest parts of a movie, while it is hard to even make out the dialog at the quietest parts. I find I'm constantly adjusting the volume during the course of a movie. Are there ways to make the difference between the loud and quiet parts less extreme? (Both computer-related and non-computer-related solutions welcome.) Like moving my speakers across the room and increasing the volume, or the opposite? Or would the extremes be less noticeable if I used headphones? Are there movie players that might have more complex sound adjustment features? If there is a software solution out there for Linux, that would be great too. Thanks, Dan
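
    What this calls for is dynamic range compression or volume normalisation of the audio. On Linux, one hedged software option is MPlayer's volume-normalisation filter applied while playing the DVD (the title number is an assumption):

        mplayer -af volnorm dvd://1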

    Read the article

  • tar: How to create a tar file with arbitrary leading directories w/o 'cd'ing to parent dir

    - by Yan
    Say I have a directory of files at /home/user1/dir1 and I want to create a tar with only "dir1" as the leading directory: /dir1/file1, /dir1/file2. I know I can first cd to the parent directory, cd /home/user1/, and then run tar czvf dir1.tar.gz dir1. But when writing scripts, jumping from directory to directory isn't always favorable. I am wondering whether there is a way to do it with absolute paths without changing the current directory. I know I can always create a tar file with absolute paths INSIDE and use --strip-components when extracting, but sometimes the extra path names are private information that you don't want to distribute with your tar files. Thanks!
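
    tar's -C option changes directory internally before adding files, so a script can produce exactly the dir1/ prefix without ever cd'ing itself; this works with both GNU tar and bsdtar:

        tar czvf /home/user1/dir1.tar.gz -C /home/user1 dir1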

    Read the article

  • Encoding Uncompressed video

    - by Sakamoto Kazuma
    I'd like to encode uncompressed video into compressed AVI or MPEG-4, and I was wondering what program I should look at for such a task. The videos are between 3 and 20 minutes long and range anywhere from 1.3 to 10 GB in uncompressed .avi form (Fraps).
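
    One free command-line option is ffmpeg. A hedged sketch that encodes a Fraps capture to H.264 in an MP4 container, where -crf trades quality against file size (lower means better quality; the file names are placeholders):

        ffmpeg -i capture.avi -c:v libx264 -crf 20 -preset medium -c:a aac output.mp4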

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin), bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin), WinRAR 3.90, and 7-Zip 4.65 (7za.exe). I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either - no matter which tool I've tried, it ends up butting up against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak

        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning

        Creating archive h:\dbatmp\zip\db-20100419_1228.7z

        Compressing  db-20100419_1228.bak

        System error: Unspecified error
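
    One hedged workaround, given that Cygwin tools are already in the mix: stream the backup through gzip and split the compressed stream on the fly, so no single tool ever has to address the complete compressed file (the file names follow the example above, and this assumes the gzip build copes with streaming input of this size):

        gzip -c db-20100419_1228.bak | split -b 250M - db-20100419_1228.bak.gz.part-
        # reassemble and decompress on the receiving side:
        cat db-20100419_1228.bak.gz.part-* | gunzip -c > db-20100419_1228.bak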

    Read the article

  • How can I evaluate the best choice of archive format for compressing files?

    - by Mehrdad
    In general, I've observed the following: Linux-y files or tools use bzip2 or gzip for distributing archives; Windows-y files or tools use ZIP for distributing archives; many people use 7-Zip for creating and distributing their own archives. Questions: What are the advantages and disadvantages of these formats, all of which appear to be open formats? When/why should I choose one (say, 7-Zip) over another (say, ZIP)? Why does the trend above appear to hold, even though all of these are portable formats? Are there any particular advantages to using a particular archive format on a particular platform?

    Read the article

  • How to split a file on Windows 2003 using an MS-supported tool

    - by Rune
    Hi, Is it possible to split a large file into smaller files on Windows 2003 using a tool provided/supported/sanctioned by Microsoft? I see that there are a lot of freeware tools (various zip tools) for this task, but I need to move files off of a production server, so I would like to avoid tools I don't know whether I can trust. I would much prefer a tool included in the Windows Server 2003 Resource Kit Tools or something along those lines. Does such a tool exist? Thank you.
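
    One Microsoft-supplied possibility is makecab.exe, which ships with Windows and can span its output across multiple cabinet files driven by a directive (.ddf) file; the pieces can be extracted on the target with expand.exe. A hedged sketch - the file names and the 250 MB cabinet size are assumptions, and the directive names should be double-checked against the makecab documentation:

        ; split.ddf
        .Set CabinetNameTemplate=part*.cab
        .Set MaxDiskSize=262144000
        .Set Cabinet=on
        .Set Compress=off
        C:\backups\largefile.bak

        makecab /f split.ddf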

    Read the article

  • Multithreaded TIFF to PNG conversion

    - by mtone
    I'm working with images of scanned books, so hundreds of high-resolution pictures. I'm doing the conversion work with Photoshop Elements - I can quickly save them to uncompressed TIFF, but converting to compressed PNG using a single thread takes ages. Do you know of any software, ideally simple and free, that would batch-convert those TIFFs to PNG in a multi-threaded manner (4 to 8 simultaneous files) to take advantage of those cores and cut conversion times? I'm not too worried about slight variations in final size.
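
    A hedged sketch using ImageMagick together with xargs (both free): -P runs several conversions at once, and mogrify -format png writes a .png next to each .tif without touching the original. The directory layout and the parallelism of 4 are assumptions:

        find . -maxdepth 1 -name '*.tif' -print0 | xargs -0 -P 4 -n 1 mogrify -format png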

    Read the article

  • CRC error when extracting to SSD from 2nd HDD

    - by gbn
    Hello. I have a large RAR file (split into parts) containing an ISO on my 2nd HDD. When I extract it to the same HDD, it's OK; when I extract it to the system/OS SSD, I get CRC errors. I've checked memory, run memtests, checked the cables, etc. I have no other issues, only with this one RAR file. Any ideas, please?

    Read the article

  • Minimize HTML, CSS and JS files

    - by karmic
    How do you automatically pack/minimize the HTML, CSS and JS files served on a webpage? More specifically, I wish to have this for a WordPress website. Should it be done at the webserver level (lighttpd), at the application level (WordPress), at the PHP level, or somewhere else?
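
    For the 'pack' half of the question, lighttpd can gzip responses at the webserver level with mod_compress; minification of the markup itself is usually handled at the application level, for example by a WordPress caching/minify plugin. A hedged sketch of the lighttpd.conf stanza (the cache path is an assumption):

        server.modules += ( "mod_compress" )
        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype  = ( "text/html", "text/plain", "text/css", "application/javascript" )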

    Read the article

  • Three formats? Why?

    - by Yar
    I needed to download the Ruby source recently from here, and it says "available in three formats", which are .tar.bz2, .tar.gz and .zip. Is there any reason that we need all three formats? At least on Linux and OS X I can handle any of the three easily. On Windows, only zip is built in, I think. Is there anything behind these preferences, or is this just a religious battle?

    Read the article

  • gunzip unexpected end of file

    - by Mark J Seger
    I like to write high-volume data directly to a gzipped file via Perl. Works like a champ! However, on some rare occasions I've found that gunzip can't uncompress them, declaring an unexpected end-of-file. The thing is, I can uncompress such a file with a simple Perl script, which probably just misses the corrupted record(s) at the end. My question is, does anyone know of a way to use a standard utility to do the same thing? Maybe with a 'decompress-as-much-as-you-can' switch? -mark
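
    A hedged note: gzip streams its output as it decodes, so when it is used with -c (or as zcat) everything it managed to decompress before hitting the truncated trailer has already been written, even though the command exits with an error. The file name below is a placeholder:

        gzip -dc broken.gz > recovered.txt
        # exits non-zero on the unexpected end-of-file, but recovered.txt
        # still holds the readable prefix of the data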

    Read the article

  • Looking for a Unix tool/script that, given an input path, will compress every 100MB batch of uncompressed text files into a single gzip file

    - by newToFlume
    I have a dump of thousands of small text files (1-5 MB each), each containing lines of text. I need to "batch" them up so that each batch is of a roughly fixed size - say 100 MB - and compress that batch. Each batch could be either a single file that is just a 'cat' of the contents of the individual text files, or just the individual text files themselves. Caveats: unix split -b will not work here, as I need to keep lines of text intact, and using the lines option is a bit complicated, as there is a large variance in the number of bytes per line. The files need not be strictly a fixed size, as long as they are within 5% of the requested size. The lines are critical and should not be lost: I need to confirm that the input made its way to the output without loss - what rolling checksum (something like CRC32, but better/"stronger" in the face of collisions) would be suitable for that? A script should do nicely, but this seems like a task someone has done before, and it would be nice to see some code (preferably Python or Ruby) that does at least something similar. A sketch is included below.
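
    A hedged sketch in plain shell (rather than the Python or Ruby the question prefers): it gathers whole .txt files into batches of roughly 100 MB of uncompressed text, gzips each batch, and records a SHA-256 of the uncompressed batch so the output can be verified after transfer. The directory names and the size limit are assumptions:

        #!/bin/bash
        SRC=./input          # where the small text files live
        DST=./batches        # where batches and checksums are written
        LIMIT=$((100 * 1024 * 1024))

        mkdir -p "$DST"
        batch=1
        size=0
        files=()

        flush() {
            [ "${#files[@]}" -eq 0 ] && return
            # checksum and compress the concatenated batch in a single pass
            cat "${files[@]}" | tee >(sha256sum > "$DST/batch$batch.sha256") \
                | gzip > "$DST/batch$batch.txt.gz"
            batch=$((batch + 1))
            size=0
            files=()
        }

        for f in "$SRC"/*.txt; do
            fsize=$(stat -c %s "$f")
            if [ "${#files[@]}" -gt 0 ] && [ $((size + fsize)) -gt "$LIMIT" ]; then
                flush
            fi
            files+=("$f")
            size=$((size + fsize))
        done
        flush
        # verify later with: gunzip -c batchN.txt.gz | sha256sum
        # and compare against the matching batchN.sha256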

    Read the article
