Search Results

Search found 802 results on 33 pages for 'compressed sensing'.

Page 5/33 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Transparently decompressing data in an archive to allow greater compression later

    - by Vi.
    I have, for example, a filesystem image that contains some weakly compressed files (gzip, say), for example man pages, or archives whose uncompressed content also sits nearby. How can I pre-filter the data to "expand" the compressed data to plain form (so it can be re-compressed with strong compression), and then post-filter after decompression to restore the original "semi-compressed" image? A bit-exact (SHA-1) match is desirable but not strictly required; the resulting image must still work, e.g. the re-compressed files should not grow too much and must remain decompressible. Essentially, I want to improve the compression ratio by reversing weak compression algorithms first. Are there programs for this?

    Read the article

  • Gzip compress offline?

    - by shoosh
    I've configured my site to serve compressed content by putting this line in .htaccess:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/javascript application/json

    This works perfectly for almost all files except a few large JSON files that are above 200 KB; for some reason they are not being compressed. I can see that they aren't by using the Net tab in Firebug and the Network section in Chrome. So as a workaround I thought I could compress these files offline and have Apache serve them already compressed. What tool should I use to compress them? Is the Linux gzip command the one, and are there any special flags or options I should use? And what should I put in .htaccess so that the server knows to serve these files with Content-Encoding: gzip?
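
    The stock gzip command produces exactly the RFC 1952 stream that browsers expect with Content-Encoding: gzip, so something like gzip -9 on a copy of each file covers the compression step. As a rough illustration only, here is a small Python sketch that pre-compresses the large JSON files offline; the directory path is a placeholder, and the Apache side (telling the server to pick the .gz variant and send the right headers) still needs separate configuration that is not shown here.

        import glob
        import gzip
        import shutil

        # Pre-compress large JSON files offline into standard gzip (RFC 1952)
        # output, which is what "Content-Encoding: gzip" expects.
        # The path below is a placeholder for this example.
        for src in glob.glob("/var/www/site/data/*.json"):
            with open(src, "rb") as f_in, gzip.open(src + ".gz", "wb", compresslevel=9) as f_out:
                shutil.copyfileobj(f_in, f_out)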

    Read the article

  • Upload image file: is compression on the client side already possible?

    - by Chris
    When offering photo file uploads, the user will usually have badly compressed and huge (10+ megapixel) JPEG files from their camera or phone. On the server side, these files get re-compressed to something like 800x600 px and JPEG quality 7 or 8. Is it (already) possible to do that re-compression on the client side, so that I would only need to transmit some 100 kB (800x600 px) instead of 3 MB or more? Something like: (1) With JavaScript's new FileSystem API ( http://slides.html5rocks.com/#filewriter ) it would be possible to read the photo file's data into client-side JS. (2) Then it would be necessary to re-encode the JPEG data, which is possible, but I could not find any library for that (yet). Does anybody know of such a library? (3) The last step would be to POST the re-compressed JPEG data to the server for storage and get back a URL to the stored photo file for inclusion in the client's HTML. I am looking for a jQuery plugin, another JS library, or an example web page that does this.

    Read the article

  • CSMA/CD and CSMA/CA

    - by Zia ur Rahman
    CSMA/CD is used in wired LANs. CSMA means that the computers on the network sense the medium: if the medium is idle, the computer transmits; otherwise it defers sending. CD refers to collision detection, which I won't go into because it's not related to my question. In wireless LANs we use CSMA/CA instead, where CSMA again refers to carrier sensing. The question is: how is carrier sensing done in wireless LANs? Collision avoidance is done by sending a control message to the intended recipient.

    Read the article

  • Django cannot find my templatetags, even though the app is in INSTALLED_APPS and has an __init__.py

    - by Vivian Short
    I just installed django-compress (http://code.google.com/p/django-compress) into /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/compress. I added 'compress' to INSTALLED_APPS. In my template file, I wrote {% load compressed %} and got this error:

        'compressed' is not a valid tag library: Could not load template library from django.templatetags.compressed, No module named compressed

    I verified that there is an __init__.py in compress/ as well as in compress/templatetags/. I tried putting the compress directory on PYTHONPATH. I ran python, typed "import compress", and that worked. Your suggestions would be much appreciated! What else can I try?

    Read the article

  • Git pack file entry format

    - by Ben Collins
    My understanding of the Git pack file format is something like this (my diagram is omitted here): the table is 32 bits wide, the first three 32-bit words are the pack file header, and the last row of 32 bits holds the first 4 bytes of an entry. As I understand it, the size of the entry is specified by consecutive bytes with the MSB set, followed by the compressed data. In the first byte whose MSB is not set, is that MSB part of the compressed data, or is it a gap? If it is part of the compressed data, how can you guarantee that bit won't be set when the data is compressed?
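
    For what it is worth, here is a minimal Python sketch of how a pack entry's type/size header is commonly parsed (my own illustration, not the asker's diagram). The point it tries to show: the size bytes form a self-contained header, and the zlib-compressed data starts only at the byte after the last header byte, so the cleared MSB belongs to the header rather than to the compressed stream.

        def parse_entry_header(buf, offset):
            """Parse a pack entry's type/size header starting at `offset`.

            Returns (obj_type, size, data_offset); the zlib-compressed data
            begins at data_offset, immediately after the last header byte.
            """
            byte = buf[offset]
            obj_type = (byte >> 4) & 0x7   # bits 6-4 of the first byte: object type
            size = byte & 0x0F             # bits 3-0: low 4 bits of the size
            shift = 4
            offset += 1
            while byte & 0x80:             # MSB set means another size byte follows
                byte = buf[offset]
                size |= (byte & 0x7F) << shift
                shift += 7
                offset += 1
            return obj_type, size, offset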

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, I'm a Python newbie; be patient/kind. Once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine whether a record is duplicated, I strip all special characters from the part number (except . and #) to create a compressed part number. The test for duplicates is then the combination of supplier code and compressed part number. This part is fairly straightforward. Currently, I just copy the original file with 2 new columns (compressed part and duplicate indicator); if the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, do it at the same time) and get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this:

        # Compress Full Part to a Compressed Part and check for duplicates
        # on the Supplier Code and Compressed Part combination
        import sys
        import re
        import time

        start = time.time()

        try:
            file1 = open("C:\Accounting\May Accounting\May.txt", "r")
        except IOError:
            print >> sys.stderr, "Cannot Open Read File"
            sys.exit(1)

        try:
            file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
        except IOError:
            print >> sys.stderr, "Cannot Open Write File"
            sys.exit(1)

        hdrList = "CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
        file2.write(hdrList + chr(10))

        lines_seen = set()
        affirm = "YES"

        records = file1.readlines()
        for record in records:
            fields = record.split(chr(124))
            if fields[0] == "CIGSupplier":
                continue  # If the incoming file has a header line, skip it
            file2.write(fields[0] + "|")   # Supplier Code
            file2.write(fields[1] + "|")   # Full_Part
            file2.write(fields[2] + "|")   # Part Status
            file2.write(fields[3] + "|")   # Alias Flag
            file2.write(re.sub("[$\r\n]", "", fields[4]) + "|")         # Acquisition Flag
            file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1]) + "|")  # Compressed_Part
            dupechk = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if dupechk not in lines_seen:
                file2.write(chr(10))
                lines_seen.add(dupechk)
            else:
                file2.write(affirm + chr(10))

        print "it took", time.time() - start, "seconds."

        file2.close()
        file1.close()

    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading, querying, and exporting a file this size in Access takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file instead. Confusing enough? Thanks.
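
    If the goal is to skip the Access step entirely, one option (a rough sketch, not the asker's code; the output file name is made up) is to remember the first record seen for each supplier/compressed-part key and, whenever a later record hits the same key, write the original and the duplicate out together. It keeps one record per key in memory, which should still be workable for roughly 7 million rows.

        import re

        SPECIALS = re.compile(r"[^0-9a-zA-Z.#]")

        def report_duplicates(in_path, out_path):
            # Map (supplier code, compressed part) -> first full record seen for that key.
            first_seen = {}
            with open(in_path, "r") as src, open(out_path, "w") as dup_out:
                for record in src:
                    fields = record.rstrip("\r\n").split("|")
                    if fields[0] == "CIGSupplier":
                        continue  # skip the header line
                    key = (fields[0], SPECIALS.sub("", fields[1]))
                    if key in first_seen:
                        # Write the earlier record and its duplicate side by side.
                        dup_out.write(first_seen[key] + "\n")
                        dup_out.write(record.rstrip("\r\n") + "\n\n")
                    else:
                        first_seen[key] = record.rstrip("\r\n")

        report_duplicates(r"C:\Accounting\May Accounting\May.txt",
                          r"C:\Accounting\May Accounting\May_DUPLICATES.txt")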

    Read the article

  • Problem when compressing a file using VB.NET

    - by Amr Elnashar
    I have a file like "Sample.bak", and when I compress it to "Sample.zip" I lose the file extension inside the zip file; I mean that when I open the compressed file I find "Sample" without any extension. I use this code:

        Dim name As String = Path.GetFileName(filePath).Replace(".Bak", "")
        Dim source() As Byte = System.IO.File.ReadAllBytes(filePath)
        Dim compressed() As Byte = ConvertToByteArray(source)
        System.IO.File.WriteAllBytes(destination & name & ".Bak" & ".zip", compressed)

    Or this code:

        Public Sub cmdCompressFile(ByVal FileName As String)
            'Stream object that reads the file contents
            Dim streamObj As Stream = New StreamReader(FileName).BaseStream
            'Allocate space in the buffer according to the length of the file read
            Dim buffer(streamObj.Length) As Byte
            'Fill the buffer
            streamObj.Read(buffer, 0, buffer.Length)
            streamObj.Close()
            'File stream object used to change the extension of the file
            Dim compFile As System.IO.FileStream = File.Create(Path.ChangeExtension(FileName, "zip"))
            'GZip object that compresses the file
            Dim zipStreamObj As New GZipStream(compFile, CompressionMode.Compress)
            'Write to the stream object from the buffer
            zipStreamObj.Write(buffer, 0, buffer.Length)
            zipStreamObj.Close()
        End Sub

    Please, I need to compress the file without losing the file extension inside the compressed file. Thanks.
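
    A likely source of the behaviour (offered as a hint, not a verified diagnosis): GZipStream compresses a bare byte stream, and the gzip container has no archive "members", so there is no per-entry file name for a zip-style extractor to show; the ZIP format, by contrast, stores a name for every entry. The contrast is easy to see with Python's standard library (the file names below are just examples); on the .NET side the fix would be a ZIP-capable library rather than GZipStream.

        import gzip
        import zipfile

        # A ZIP archive stores a name for every member, so "Sample.bak"
        # keeps its full name (and extension) inside the archive.
        with zipfile.ZipFile("Sample.zip", "w", zipfile.ZIP_DEFLATED) as zf:
            zf.write("Sample.bak", arcname="Sample.bak")

        # A gzip stream is just compressed bytes with no member name at all,
        # which is why extracting one yields a nameless blob.
        with open("Sample.bak", "rb") as src, gzip.open("Sample.bak.gz", "wb") as dst:
            dst.write(src.read())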

    Read the article

  • What do the numbers 240 and 360 mean when downloading video? How can I tell which video is more compressed?

    - by DaMing
    I have recently downloaded some computer science lectures from YouTube. There is usually more than one choice of file size and file format to download. I noticed that for the same video, the download labelled FLV 240 is larger than the one labelled MPEG4 360. What do the numbers (240 and 360) mean? And which file is compressed more heavily, that is, which one discards more of the original material?

    Read the article

  • Does SWF provide a better compression rate than zlib for PNG images?

    - by Huang F. Lei
    Somebody told me that when a PNG image is stored in a SWF, it is separated into several layers, so the alpha channel can be compressed better. Is that true? Or, once a PNG image is imported into a SWF, is its format changed, e.g. converted into bitmap data and then compressed by SWF's own compression algorithm, so that it is no longer in PNG format at all? I don't know how SWF packs its resources; please tell me if you know.

    Read the article

  • GZipped Images - Is It Worth It?

    - by charlie
    Most image formats are already compressed. But in fact, if I take an image, gzip it, and then compare the compressed file with the uncompressed one, there is a difference in size, even if not a dramatic one. The question is: is it worth gzipping images? The content size flushed down to the client's browser will be smaller, but there will be some client overhead in un-gzipping it. Please advise.

    Read the article

  • How can I detect if an iPhone OS device has a proximity sensor?

    - by Batgar
    The docs related to proximity sensing state that if the proximity sensing APIs are used on a device without a proximity sensor (e.g. iPod touch, iPad), they will just return as if the proximity sensor has fired. Aside from checking the [UIDevice currentDevice].model string and parsing for "iPhone", "iPod touch", or "iPad", is there a slicker way to determine whether a given device has a proximity sensor?

    Read the article

  • Serving static content from a cookieless domain and mod_deflate

    - by Saif Bechan
    I have two domains: one with my main website, and the other serving static content (JS/CSS/etc. files). mod_deflate is enabled for both domains, but when I run YSlow in Firefox it says none of my static content is compressed. When I bring a JS or CSS file back to my main domain it gets compressed fine; only when it is served from the other domain is it not compressed. Do I have to do some more configuration for this to work? I am using this line in my .htaccess file:

        AddOutputFilterByType DEFLATE application/x-javascript text/css text/html text/xml text/plain application/x-httpd-php

    I tried putting the line in my httpd.conf file instead, but it gives me the same results. PS: if this is more of a Server Fault question, I apologize, but I see a lot of questions here concerning mod_deflate and YSlow.

    Read the article

  • I am trying to zip files individually, but the file type is unknown

    - by Jason Mander
    I am trying to zip some files of an unknown file type individually. I am using the following code in a batch script to do that:

        @ECHO OFF
        FOR %%A IN (bestbuy*nat*component.cpi*) DO "C:\Program Files\7-Zip\7z.exe" a -mx9 -m0=lzma2:d256m "%%~nA.7z" "%%A"

    The code compresses the files individually ONLY if they have an extension. Unfortunately the files that I have don't have any extension, and when I zip them by pattern match like this they all get compressed into ONE archive, which I do not want; I want each file compressed individually. Why does this code create separate zip files when the files have an extension (for example if I add .txt to the end of the files), but only one zipped file when there is no extension? Can anyone please help me with the code so that each file with an unknown file type gets compressed individually? Your help would be greatly appreciated. Jason

    Read the article

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a ZFS-formatted partition using zfs-fuse on Linux (Ubuntu). I had used it for a while and then enabled dedup and compression on it (zfs set compression=on / dedup=on). Now I think I have some files that are deduped and compressed, and some files that are not yet. It was OK, but sometimes I was confused. Let's see: the following command would consume almost 4 GB of my ZFS storage:

        cp oldfile.4GB newfile.4GB

    ... and this one would consume almost zero:

        cp newfile.4GB newfile.4GB.2

    This is because the old file is not yet compressed, so dedup has not happened, I think. My idea is that if I can find the old files that are not yet deduped/compressed, I can batch copy/rename/remove them to eliminate the duplication and redundancy. But how can I check that? I know that re-copying the whole contents of my storage would work (even better while checking the timestamp of each file), but I'd be happier if there were a zfsstat-like tool that shows such per-file properties.

    Read the article

  • Replacing files in a folder structure with files from an unsorted folder

    - by Andrew
    I have over 50,000 PDFs organized into folders inside a folder called PDFACT. I needed to compress these files, so I ran them through Adobe to batch compress them, and this worked, except that Adobe could only output the files without their folder structure. So basically I started with 50,000 PDFs organized in a folder with hundreds of subfolders, and I ended up with one folder holding 50,000 compressed PDFs, just in alphabetical order. Somehow I need to replace all the original PDFs with their compressed copies. Let me give an example: in the folder PDFACT we have the following file:

        C:\PDFACT\BIG DINNER\BILL\NEWESTBILL.PDF

    ... and in the output folder that Adobe created we have just:

        C:\COMPRESSED_PDF_FOLDER\NEWESTBILL.PDF

    This copy is smaller than the one in PDFACT and has the same name, but it is just lumped in with every other PDF; the folder structure and subfolders are gone. Is there any way to replace all the larger uncompressed PDFs inside the original folder structure with their now-compressed counterparts?
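
    One way to script this, sketched below on the assumption that file names are unique across the whole tree and that a same-named file in the flat Adobe output folder is always the compressed counterpart:

        import os
        import shutil

        ORIGINALS = r"C:\PDFACT"
        COMPRESSED = r"C:\COMPRESSED_PDF_FOLDER"

        # Walk the original folder structure and overwrite each PDF with the
        # same-named compressed copy from the flat output folder, if one exists.
        for root, _dirs, files in os.walk(ORIGINALS):
            for name in files:
                if not name.lower().endswith(".pdf"):
                    continue
                replacement = os.path.join(COMPRESSED, name)
                if os.path.isfile(replacement):
                    shutil.copy2(replacement, os.path.join(root, name))
                else:
                    print("No compressed copy found for " + os.path.join(root, name))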

    Read the article

  • Loading the main JavaScript on every page, or breaking it up across relevant pages?

    - by Kyle
    I have a 700 kB decompressed JS file which is loaded on every page. I used to have 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into one file. This file is ~130 kB gzipped and is served with gzip, but on the client it is still unpacked and loaded on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any problems. The issue (or illusion of one) I am facing is that the file bundles jQuery libraries that are not used on every page. For example, jQuery DataTables is 200 kB compressed and is only used on 2 of my site's pages, and jqPlot is another 200 kB. So I now have 400 kB of code that isn't executed on 80% of the pages. Should I leave everything in one file, or should I pull out those jQuery libraries and load only the JS relevant to the current page?

    Read the article

  • How should I use Huffman coding to compress a file in which every byte is the same?

    - by Omega
    On my great quest to compress/decompress files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, a code is made of zeroes and ones that are used to follow a path in the Huffman tree (left or right) until, ultimately, a byte is reached. In the Wikipedia image, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there is some gain). So every time the character m appears in the file (which in binary is actually 1101101), it is replaced by 0111 (if we used the tree above); therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that. But what if the following happens: the file to be compressed contains only one unique byte, say 1101101, and there are 1000 of them. Technically, the prefix code of that byte would be... none, because there is no path to follow, right? There is only one unique byte, so the tree has just one node. And if the prefix code is none, I would not be able to write a prefix code in the compressed file, because there is nothing to write. Which brings up this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing (using Huffman coding, due to the school assignment's rules)? This tutorial seems to explain prefix codes a bit better, http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html , but it doesn't seem to address this issue either.
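
    A common way around this degenerate case is to give the lone symbol an explicit one-bit code (or, equivalently, to store just the symbol and its count in the header and write no body at all). A minimal sketch of the first option, using a build_codes helper of my own invention rather than anything from the assignment:

        def build_codes(freqs):
            """Return {byte_value: prefix_code_string} for a frequency table.

            The special case below covers a file made of a single repeated
            byte, where the Huffman tree has only one node and therefore no
            left/right path that could produce a code.
            """
            if len(freqs) == 1:
                (symbol,) = freqs
                return {symbol: "0"}   # force a 1-bit code for the lone symbol
            raise NotImplementedError("usual Huffman construction goes here")

        # Example: a file of 1000 identical bytes is encoded as 1000 single
        # bits plus the header that records the symbol-to-code mapping.
        codes = build_codes({0b1101101: 1000})
        assert codes == {0b1101101: "0"}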

    Read the article

  • Helping to Reduce the Page Compression Failure Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even on a per-index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    This is similar to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT
          database_name,
          table_name,
          index_name,
          compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes, i.e. those with the highest compression failure rate. From there the way to improve performance would be to try increasing the compressed page size, or changing the structure of the tables/indexes or the data being stored, and see whether that has a positive impact on performance.

    Read the article

  • Decompress a PSC File

    - by Kevin Laity
    I need to restore a database that was backed up using Navicat's compressed backup feature, but on the restore an error comes up on one of the tables. I can't look at the SQL statements in the file directly because they're compressed, and I don't know what compression format is used. Does anyone know of a way to decompress this file into a regular text file?

    Read the article

  • VLC dynamic range compression across multiple songs

    - by Sion
    In my collection of music I have some songs which seem to be nicely compressed, but in addition to those I have songs which are overly quiet compared to the louder, compressed songs. So maybe the problem isn't compression but average volume. Would the dynamic range compressor in VLC work for this kind of problem, or would I have better luck using external speakers and running the audio through a guitar compressor?

    Read the article

  • Gzip not working in browser

    - by Cathal
    According to whatsmyip.org, none of my browsers (Firefox, Chrome, etc.) on Windows 7 are gzip-enabled; it says "NO, your browser is not requesting compressed content", which agrees with the Chrome developer tools: when I was testing a site, they complained that the page, CSS, etc. were not compressed. I've searched for an answer but cannot find anything about this. I've tested from another PC connected to the same router and that works fine, so something on this PC is broken. Any help appreciated, TIA.

    Read the article

  • Zlib compression in boost::iostreams not compatible with zlib.NET

    - by Johan
    Hello, I want to send compressed data in zlib format from a C# application to a C++ application. In C++, I use the zlib_compressor/zlib_decompressor available in boost::iostreams. In C#, I am currently using the ZOutputStream available in the zlib.NET library. First of all, when I compress the same data using both libraries, the results look different:

        boost::iostreams::zlib_compressor:
        FF 13 49 48 00 00 01 00 01 00 00 00 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00

        zlib.NET (zlib.ZOutputStream):
        FF 13 49 48 00 00 01 00 01 00 00 00 78 9C 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00 4F 31 63 8D

    (Note the 78 9C pattern that is present in the zlib.NET output but not in the boost output.) Furthermore, when I decompress data in boost that was compressed with zlib.NET, I am not able to read from the stream, suggesting something is wrong. It does work when I decompress data that was compressed in boost. Does anybody know what is going wrong? Thank you, Johan
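
    A hint rather than a verified answer: the extra 78 9C is the two-byte zlib (RFC 1950) header, and the four extra trailing bytes are the Adler-32 checksum that closes a zlib stream; the boost output shown here looks like raw deflate (RFC 1951) instead, so the two sides seem to disagree about whether that wrapper is present (in boost::iostreams the header behaviour is configured through zlib_params). The difference is easy to reproduce with Python's zlib module:

        import zlib

        data = b"example payload" * 4

        # zlib.compress() produces an RFC 1950 stream: a 2-byte header
        # (typically 78 9C) + deflate data + a 4-byte Adler-32 trailer.
        wrapped = zlib.compress(data)
        print(wrapped[:2].hex())            # -> '789c'

        # A negative wbits value requests raw deflate (RFC 1951): no header
        # and no checksum, matching streams produced with the header disabled.
        co = zlib.compressobj(wbits=-15)
        raw = co.compress(data) + co.flush()
        print(raw[:2].hex())                # no 78 9c prefix

        # Decompressing raw deflate likewise needs wbits=-15.
        assert zlib.decompressobj(wbits=-15).decompress(raw) == data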

    Read the article
