Search Results

Search found 802 results on 33 pages for 'compressed sensing'.

  • Selenium RC cannot test on compressed html

    - by JH
    In order to speed up the website, the web server compresses (gzips) the HTML files before sending them to our clients. When running Selenium tests, a pop-up shows up saying: You have chosen to open ... which is a: Bin file from: http://... Would you like to save this file? "Cancel" "Save File" It seems that the compressed HTML file isn't being unzipped, so the browser recognises it as a binary file.

    Read the article

  • Motion detection in compressed domain (JPEG/Mpeg4/H264)

    - by paft
    Everyone! I process video from IP cameras and have written a motion detection algorithm based on analysis of the decompressed video. But I really need something faster. I've found several papers about compressed-domain analysis but have failed to find any implementations. Can anyone recommend me some code? Found materials: http://www.ist-live.org/intranet/school-of-informatics-university-of-bradford001-7/41410206.pdf/view http://doc.rero.ch/lm.php?url=1000,43,4,20061128120121-NA/Bracamonte_Javier_-_A_Low_Complexity_Change_Detection_Algorithm_20061128.pdf

    Read the article

  • Updating a single file in a compressed tar

    - by Phil
    Given a compressed archive file such as application.tar.gz which has a folder application/x/y/z.jar among others, I'd like to be able to take my most recent version of z.jar and update/refresh the archive with it. Is there a way to do this other than something like the following?

        tar -xzf application.tar.gz
        cp ~/myupdatedfolder/z.jar application/x/y
        tar -czf application.tar.gz application

    I understand the -u switch in tar may be of use to avoid having to untar the whole thing, but I'm unsure how to use it exactly.
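
    One caveat worth knowing up front: tar's -u switch only works on uncompressed archives, so the .tar.gz would have to be gunzipped, updated, and regzipped anyway. A minimal sketch of the rebuild approach instead, in Python (paths taken from the question; the archive is rewritten with the single member swapped out):

        import os
        import tarfile

        def replace_member(archive, member_name, new_file):
            """Rewrite a .tar.gz, substituting one member with a local file."""
            tmp = archive + ".tmp"
            with tarfile.open(archive, "r:gz") as src, \
                 tarfile.open(tmp, "w:gz") as dst:
                for entry in src:
                    if entry.name == member_name:
                        # Swap in the updated file, keeping the archived path.
                        dst.add(new_file, arcname=member_name)
                    else:
                        dst.addfile(entry, src.extractfile(entry))
            os.replace(tmp, archive)

        replace_member("application.tar.gz", "application/x/y/z.jar",
                       os.path.expanduser("~/myupdatedfolder/z.jar"))

    Either way the whole archive gets rewritten; gzip's stream format leaves no way to splice a member in place.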

    Read the article

  • When should I use MySQL compressed protocol?

    - by ento
    I've learned that MySQL can compress communication between servers and clients. Compression is used if both client and server support zlib compression, and the client requests compression. (from the MySQL Forge Wiki) The most obvious pros and cons: pros - reduced payload size; cons - increased computation time. So, is the compressed protocol something I should enable whenever I can afford servers with adequate specs? Are there other factors I should consider?
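
    For reference, compression is requested per connection from the client side. A minimal sketch using mysql-connector-python (the compress flag is the relevant part; host and credentials are placeholders):

        import mysql.connector

        # compress=True requests the compressed client/server protocol;
        # it only takes effect if the server also supports zlib.
        conn = mysql.connector.connect(
            host="db.example.com",   # placeholder
            user="app",              # placeholder
            password="secret",       # placeholder
            compress=True,
        )
        conn.close()

    The usual rule of thumb follows from the pros and cons above: compression tends to pay off across slow or metered links and for large result sets, and to cost more than it saves on a fast LAN with small queries.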

    Read the article

  • Why are my images compressed?

    - by Johnny
    I'm using a layout XML file for the UI, but the images are compressed and the quality is degraded to some degree. My code is like this:

        <ImageView android:layout_width="480px"
                   android:layout_height="717px"
                   android:layout_x="0px"
                   android:layout_y="45px"
                   android:scaleType="fitXY"
                   android:src="@drawable/e4" />

    The drawable is actually 480x717. What's the problem here? Is it due to the fitXY?

    Read the article

  • UI not updated while using ProgressMonitorInputStream in Swing to monitor compressed file decompress

    - by Bozhidar Batsov
    I'm working on a Swing application that relies on an embedded H2 database. Because I don't want to bundle the database with the app (the db is frequently updated and I want new users of the app to start with a recent copy), I've implemented a solution which downloads a compressed copy of the db the first time the application is started and extracts it. Since the extraction process might be slow, I've added a ProgressMonitorInputStream to show the progress of the extraction - unfortunately when the extraction starts, the progress dialog shows up but it's not updated at all. It seems like no events are getting through to the event dispatch thread. Here is the method:

        public static String extractDbFromArchive(String pathToArchive) {
            if (SwingUtilities.isEventDispatchThread()) {
                System.out.println("Invoking on event dispatch thread");
            }

            // Get the current path, where the database will be extracted
            String currentPath = System.getProperty("user.home") + File.separator
                    + ".spellbook" + File.separator;

            LOGGER.info("Current path: " + currentPath);

            try {
                // Open the archive
                FileInputStream archiveFileStream = new FileInputStream(pathToArchive);

                // Read two bytes from the stream before it is used by CBZip2InputStream
                for (int i = 0; i < 2; i++) {
                    archiveFileStream.read();
                }

                // Open the bzip2 stream and the output file
                CBZip2InputStream bz2 = new CBZip2InputStream(new ProgressMonitorInputStream(
                        null, "Decompressing " + pathToArchive, archiveFileStream));
                FileOutputStream out = new FileOutputStream(ARCHIVED_DB_NAME);

                LOGGER.info("Decompressing the tar file...");

                // Transfer bytes from the compressed file to the output file
                byte[] buffer = new byte[1024];
                int len;
                while ((len = bz2.read(buffer)) > 0) {
                    out.write(buffer, 0, len);
                }

                // Close the file and stream
                bz2.close();
                out.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException ex) {
                ex.printStackTrace();
            }

            try {
                TarInputStream tarInputStream = null;
                TarEntry tarEntry;

                tarInputStream = new TarInputStream(new ProgressMonitorInputStream(
                        null, "Extracting " + ARCHIVED_DB_NAME,
                        new FileInputStream(ARCHIVED_DB_NAME)));

                tarEntry = tarInputStream.getNextEntry();
                byte[] buf1 = new byte[1024];

                LOGGER.info("Extracting tar file");

                while (tarEntry != null) {
                    // For each entry to be extracted
                    String entryName = currentPath + tarEntry.getName();
                    entryName = entryName.replace('/', File.separatorChar);
                    entryName = entryName.replace('\\', File.separatorChar);

                    LOGGER.info("Extracting entry: " + entryName);

                    FileOutputStream fileOutputStream;
                    File newFile = new File(entryName);

                    if (tarEntry.isDirectory()) {
                        if (!newFile.mkdirs()) {
                            break;
                        }
                        tarEntry = tarInputStream.getNextEntry();
                        continue;
                    }

                    fileOutputStream = new FileOutputStream(entryName);

                    int n;
                    while ((n = tarInputStream.read(buf1, 0, 1024)) > -1) {
                        fileOutputStream.write(buf1, 0, n);
                    }

                    fileOutputStream.close();
                    tarEntry = tarInputStream.getNextEntry();
                }

                tarInputStream.close();
            } catch (Exception e) {
            }

            currentPath += "db" + File.separator + DB_FILE_NAME;

            if (!currentPath.isEmpty()) {
                LOGGER.info("DB placed in : " + currentPath);
            }

            return currentPath;
        }

    This method gets invoked on the event dispatch thread (SwingUtilities.isEventDispatchThread() returns true) so the UI components should be updated. I haven't implemented this as a SwingWorker since I need to wait for the extraction anyway before I can proceed with the initialization of the program. The method gets invoked before the main JFrame of the application is visible.
    I don't want a solution based on SwingWorker + property change listeners - I think the ProgressMonitorInputStream is exactly what I need, but I guess I'm not doing something right. I'm using Sun JDK 1.6.18. Any help would be greatly appreciated.

    Read the article

  • Sending compressed data via socket

    - by Pizza
    Hi, I have to make a log server in Java, and one task is to send the data compressed. Right now I am sending it line by line in plain text, but I must compress it. The server handles "HTTP like" requests. For example, I can get a log by sending "GET xxx.log". This establishes a TCP connection to the server; the server responds with a header and the log, and closes the connection. The client reads line by line and analyzes each LOG entry. I tried some ways without success. My main problem is that I don't know where each line ends (on the client side). Any idea?
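
    The usual fix for "where does each line end" is to frame the stream yourself: compress the payload and prefix each compressed block with its byte length, so the client reads exactly that many bytes before inflating and only then splits into lines. A minimal sketch of the framing (Python for brevity; the same protocol maps directly onto Java's Deflater/Inflater and DataOutputStream.writeInt):

        import socket
        import struct
        import zlib

        def send_log(sock: socket.socket, text: str) -> None:
            """Compress the payload and prefix it with a 4-byte big-endian length."""
            blob = zlib.compress(text.encode("utf-8"))
            sock.sendall(struct.pack(">I", len(blob)) + blob)

        def recv_log(sock: socket.socket) -> list:
            """Read the length header, then exactly that many bytes, then inflate."""
            (size,) = struct.unpack(">I", _recv_exact(sock, 4))
            blob = _recv_exact(sock, size)
            return zlib.decompress(blob).decode("utf-8").splitlines()

        def _recv_exact(sock: socket.socket, n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed mid-frame")
                buf += chunk
            return buf

    Alternatively, wrapping both ends of the socket in GZIPOutputStream/GZIPInputStream keeps the line-by-line reading model intact, since the inflated stream still contains the original newlines.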

    Read the article

  • Why compressed xps are corrupt?

    - by Gio
    Compressed XPS documents do not pass the isxps.exe test and do not display in the XPS Viewer. I'm using Windows 7 and VS2010.

        Dim ms = New MemoryStream()
        Dim P As Package = Package.Open(ms, FileMode.Create, FileAccess.ReadWrite)
        Dim DocumentUri As Uri = New Uri("pack://document.xps")
        PackageStore.AddPackage(DocumentUri, P)
        Dim document As XpsDocument = New XpsDocument(P, CompressionOption.Maximum, DocumentUri.AbsoluteUri)

    If I change CompressionOption.Maximum to CompressionOption.None, all works perfectly. IsXps.exe says "Unable to open Zip archive Error code: 0x8000FFFF", same with the default Windows 7 XPS Viewer. What am I doing wrong? (I've also installed the latest 7zip program, but I do not think that it would corrupt the default Windows zip capability... or would it?)

    Read the article

  • searching within a compressed sorted fixed width file

    - by user275455
    Assume I have a regular compressed fixed-width file that is sorted on one of the fields. Given that I know the length of the records, I can use lseek to implement a binary search for records whose field matches a given value, without having to read the entire file. Now the difficulty is that the file is gzipped. Is it possible to do this without completely inflating the file? If not with gzip, is there any compression that supports this kind of behavior?
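
    Plain gzip is one continuous Deflate stream, so there is no way to jump to record N without inflating everything before it. Formats that do support this compress the data in independent blocks and keep an offset index (bzip2's blocks, and gzip-compatible wrappers such as dictzip or bgzf, work this way). A minimal sketch of the idea - fixed groups of records compressed independently plus a block index, so a lookup only inflates the blocks it probes (record and block sizes are illustrative):

        import bisect
        import zlib

        RECORD_LEN = 32           # fixed record width (illustrative)
        RECORDS_PER_BLOCK = 1024  # records per independently compressed block

        def write_indexed(records, path):
            """Compress sorted fixed-width records in independent zlib blocks.
            Returns (start, end, first_key) per block as the seek index."""
            index = []
            with open(path, "wb") as f:
                for i in range(0, len(records), RECORDS_PER_BLOCK):
                    chunk = records[i:i + RECORDS_PER_BLOCK]
                    start = f.tell()
                    f.write(zlib.compress(b"".join(chunk)))
                    index.append((start, f.tell(), chunk[0][:8]))  # key = first 8 bytes
            return index

        def find(path, index, key):
            """Binary-search the block index, inflate one block, scan it."""
            pos = bisect.bisect_right([entry[2] for entry in index], key) - 1
            if pos < 0:
                return None
            start, end, _ = index[pos]
            with open(path, "rb") as f:
                f.seek(start)
                block = zlib.decompress(f.read(end - start))
            for off in range(0, len(block), RECORD_LEN):
                record = block[off:off + RECORD_LEN]
                if record[:8] == key:
                    return record
            return None

    The trade-off is compression ratio: smaller blocks seek faster but compress worse, which is exactly the dial tools like bgzf expose.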

    Read the article

  • Generate jpeg compressed Tiff using RGB colorspace (using Java + JAI)

    - by nOiSe gaTe
    I'm trying to make tiled pyramidal TIFFs from master TIFF images using Java (JAI 1.1.3 + ImageIO). The problem is: I NEED to make TIFF files compressed in JPEG with RGB photometric interpretation, and no matter what I try, the image still ends up with YCbCr photometric interpretation. Note: if I change the compression type (e.g. LZW or Deflate) I can get the RGB colorspace but, as I said, I need JPEG compression. Apart from this detail the tiled pyramidal TIFF I create is OK, so I think it's better to focus on the simple compression step (uncompressed TIFF to JPEG TIFF). This may be a basic example (removing any checks to make it more readable):

        // reading MASTER
        BufferedImage img = ImageIO.read(uncompressedTiff);

        ImageOutputStream ios = null;
        ImageWriter writer = null;

        ios = ImageIO.createImageOutputStream(outFile);
        Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("tiff");
        writer = writers.next();

        // saving and compression params
        writer.setOutput(ios);
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType("JPEG");
        param.setCompressionQuality(0.8f);
        param.setTilingMode(ImageWriteParam.MODE_EXPLICIT);
        param.setTiling(256, 256, 0, 0);

        IIOMetadata metadata = getWriteMD(writer, param);

        // writing
        writer.write(metadata, new IIOImage(img, null, metadata), param);

    In the getWriteMD method:

        ....
        TIFFTag photoInterp = base.getTag(BaselineTIFFTagSet.TAG_PHOTOMETRIC_INTERPRETATION);
        TIFFField fieldPhotoInter = new TIFFField(photoInterp,
                BaselineTIFFTagSet.PHOTOMETRIC_INTERPRETATION_RGB);
        ....

    Here is the full version of getWriteMD()

    Read the article

  • Why does my javascript file sometimes compressed while sometimes not?(IIS Gzip problem)

    - by Kevin Yang
    I enabled gzip for JavaScript files in my IIS settings; here's the corresponding config section:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll"
                    staticCompressionLevel="10" dynamicCompressionLevel="8" />
            <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/soap+msbin1" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
            <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </staticTypes>
        </httpCompression>

    Currently, when I download my JS file, the server sometimes returns the gzipped one and sometimes not. I don't know why, or how to debug it. If a file has already been gzipped, it should be cached on local disk, and the next time someone visits that file the IIS kernel should return the cached gzip file directly without compressing it again. Is that right?
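
    One way to see the intermittent behavior from the client side is to request the file repeatedly with Accept-Encoding: gzip and tally the Content-Encoding that comes back. A minimal sketch (the URL is a placeholder; urllib is used because it does not transparently decompress, so the header reflects what actually arrived). Keep in mind that IIS 7 only serves a static file compressed once it has been requested often enough to cross the frequentHitThreshold, which can look like randomness until the file is "hot":

        from collections import Counter
        import urllib.request

        URL = "http://myserver/scripts/app.js"  # placeholder

        tally = Counter()
        for _ in range(20):
            req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
            with urllib.request.urlopen(req) as resp:
                tally[resp.headers.get("Content-Encoding", "identity")] += 1

        print(tally)  # e.g. Counter({'gzip': 14, 'identity': 6})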

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on these was conceived, implemented and contributed by the engineers at Facebook.

    Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are of a fixed size; supported sizes are 1, 2, 4, 8 and 16K, and the compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; i.e., we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll only ever have the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically.

    Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge.

    A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well, then we split the page into two and recompress both pages.

    Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to recover using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompression attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavyweight but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which matches the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again, the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an agreeable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:

    innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start padding. The value 0 has the special meaning of disabling padding.

    innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
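
    All four parameters described above are dynamic, so they can be changed on a running server. A minimal sketch of tuning them from a client (the values are illustrative, not recommendations; shown via mysql-connector-python, but any client that can issue SET GLOBAL will do):

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="root", password="...")
        cur = conn.cursor()

        # Skip logging compressed page images (safe only if recovery will use
        # the same zlib version that wrote the redo log).
        cur.execute("SET GLOBAL innodb_log_compressed_pages = OFF")

        # Trade CPU for a better ratio (zlib level, 1-9; default 6).
        cur.execute("SET GLOBAL innodb_compression_level = 9")

        # Start padding after 10% of compression ops fail; cap the pad at 60%.
        cur.execute("SET GLOBAL innodb_compression_failure_threshold_pct = 10")
        cur.execute("SET GLOBAL innodb_compression_pad_pct_max = 60")

        conn.close()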

    Read the article

  • How to uncompress the NSData in Php which is compressed using zlib in Iphone

    - by Gaurav Arora
    Hello everyone, I am quite new to iPhone development, so please bear with me if I ask some common questions. In my application I have to transfer data from my iPhone app to a PHP server, and for this I have to compress the NSData in my iPhone app, pass it on to the PHP server, then uncompress it in PHP and process the data sent by the iPhone. For compressing the data on the iPhone I have used the zlib library. Now on the PHP side I want to uncompress this data, but I am unable to do so. Can anyone help me uncompress this data in PHP? Thanks in advance. Gaurav Arora
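
    Assuming the iPhone side produces a standard zlib (RFC 1950) stream via compress()/deflate, the matching PHP call is gzuncompress(); for a raw Deflate stream (deflateInit2 with a negative window size) it is gzinflate() instead. A quick way to tell which one you have is to round-trip the received bytes outside PHP first; a minimal sketch in Python (the file name is a placeholder):

        import zlib

        with open("upload.bin", "rb") as f:  # the bytes the server received
            data = f.read()

        # zlib (RFC 1950) streams start with 0x78; raw Deflate has no header.
        print("first bytes:", data[:2].hex())

        try:
            payload = zlib.decompress(data)       # zlib-wrapped -> gzuncompress()
        except zlib.error:
            payload = zlib.decompress(data, -15)  # raw Deflate  -> gzinflate()
        print(len(payload), "bytes after inflation")

    A common pitfall on this route is the transport: if the compressed bytes travel as a POST field without base64 or URL encoding, PHP may receive a mangled stream that no decompressor will accept.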

    Read the article

  • Producing CCITT compressed TIFF from CGImage

    - by Brian Postow
    I have a CGImage (Core Graphics, C/C++). It's grayscale. Well, originally it was B/W, but the CGImage may be RGB. That shouldn't matter. I want to create a CCITT Group 4 TIFF. I can create an LZW TIFF (grayscale or color) by creating a destination with the correct dictionary and adding the image in. No problem. However, there doesn't seem to be an equivalent kCGImagePropertyTIFFCompression value to represent CCITT-4. It should be 4, but that produces uncompressed output. I have a manual CCITT compression routine, so if I can get the binary (1 bit per pixel) data, I'm set. But I can't seem to get 1 BPP data out of a CGImage. I have code that is supposed to put the CGImage into a CGBitmapContext and then give me the data, but it seems to be giving me all black. I've asked a couple of questions today trying to get at this, but I just figured, let's ask the question I REALLY want answered and see if someone can answer it. There's GOT to be a way to do this. I've got to be missing something dumb. What is it?
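
    For reference, the packing step itself is mechanical once 8-bit grayscale pixels are in hand: threshold each pixel and pack eight pixels per byte, most significant bit first, padding each row to a byte boundary. A minimal sketch of that step (Python for brevity; the 128 threshold and the 1-equals-black convention, which matches TIFF's WhiteIsZero photometric, are assumptions):

        def pack_1bpp(gray: bytes, width: int, height: int, threshold: int = 128) -> bytes:
            """Pack 8-bit grayscale into 1 bpp, MSB first, rows padded to a byte."""
            row_bytes = (width + 7) // 8
            out = bytearray(row_bytes * height)
            for y in range(height):
                for x in range(width):
                    if gray[y * width + x] < threshold:  # dark pixel -> bit set (black)
                        out[y * row_bytes + x // 8] |= 0x80 >> (x % 8)
            return bytes(out)

    Rendering into an 8-bit grayscale bitmap context and then packing as above sidesteps the fact that CGBitmapContext offers no 1-bit destination format; the "all black" symptom is often a bytes-per-row mismatch when reading the context's buffer.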

    Read the article

  • Problem cube mapping in OpenGL using DDS compressed images

    - by Paul Jones
    Hi all, I am having trouble cube mapping when using a DDS cube map. I'm just getting a black cube, which leads me to believe I am missing something simple; here's the code so far:

        DDS_IMAGE_DATA *pDDSImageData = LoadDDSFile(filename);
        //compressedTexture = -1;

        if (pDDSImageData != NULL)
        {
            int height     = pDDSImageData->height;
            int width      = pDDSImageData->width;
            int numMipMaps = pDDSImageData->numMipMaps;
            int blockSize;

            GLenum cubefaces[6] =
            {
                GL_TEXTURE_CUBE_MAP_POSITIVE_X,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
                GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
                GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_Z,
            };

            if (pDDSImageData->format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT)
                blockSize = 8;
            else
                blockSize = 16;

            glGenTextures(1, &textureId);

            int nSize;
            int nOffset = 0;

            glEnable(GL_TEXTURE_CUBE_MAP);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

            for (int face = 0; face < 6; face++)
            {
                for (int i = 0; i < numMipMaps; i++)
                {
                    if (width == 0)  width = 1;
                    if (height == 0) height = 1;

                    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP, GL_TRUE);

                    nSize = ((width + 3) >> 2) * ((height + 3) >> 2) * blockSize;
                    glCompressedTexImage2D(cubefaces[face], i, pDDSImageData->format,
                                           width, height, 0, nSize,
                                           pDDSImageData->pixels + nOffset);
                    nOffset += nSize;

                    // Half the image size for the next mip-map level...
                    width  = (width / 2);
                    height = (height / 2);
                }
            }
        }

    Once this code is called I bind the texture using glBindTexture and draw a cube using GL_QUADS and glTexCoord3f. Thank you for reading; all comments are welcome, and sorry if any of the formatting comes out wrong.

    Read the article

  • Listing the contents of a LZMA compressed file?

    - by ssn
    Is it possible to list the contents of an LZMA-compressed file (.7z) without uncompressing the whole file? Also, can I extract a single file from the archive? My problem: I have a 30GB .7z file that uncompresses to 5TB. I would like to manipulate the original .7z file without needing to do a full uncompress.
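
    For what it's worth, the 7z command-line tool can do both: "7z l archive.7z" lists the contents from the archive's index, and "7z e archive.7z path/inside/file" extracts a single member - though if the archive is solid, 7z may still have to decompress everything stored before that member. A minimal sketch driving it from Python (archive and member names are placeholders):

        import subprocess

        ARCHIVE = "big.7z"  # placeholder

        # List the contents without extracting anything.
        listing = subprocess.run(["7z", "l", ARCHIVE],
                                 capture_output=True, text=True, check=True)
        print(listing.stdout)

        # Extract a single member into the current directory.
        subprocess.run(["7z", "e", ARCHIVE, "logs/2010-03-01.log"], check=True)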

    Read the article

  • converting compressed Rich text (C#)

    - by Code Smack
    Hello, I have a string with compressed rich text in it. I want to uncompress it to standard RTF and then to plain text. I am using a VS 2008 C# Windows application. I found references to C++ code that does it, but nothing directly in C#. Here is a sample of what I am thinking... not much code though:

        private string _CompressedRichText = "5A46751234E1234A3214C12D34213412346F4";

        private void button1_Click(object sender, EventArgs e)
        {
            // Convert to uncompressed RTF
            richTextBox1.Text = // Some action

            // Convert to text
            textBox1.Text = // some action
        }

    This will at least give you an example of what I am thinking.

    Read the article

  • Tell if IIS is being asked to serve compressed pages?

    - by Graham
    Hi, I'm trying to find out if our IIS server is being asked to serve pages compressed. I'm a noob regarding a lot of this, so I'm working my way through the issues. We're using IIS 6.0 and have correctly turned compression on. If I use Fiddler2 to analyse the HTTP requests via localhost, then Fiddler reports that the pages are compressed. If we then access the server over the network, either via its external URL or via the internal server name, Fiddler reports those pages as uncompressed. Therefore, it's logical to assume that something is getting in the way - presumably our ISA server. Our ISA administrator states that ISA is configured to allow compressed requests, but what I want to do is to look at the requests coming through to IIS to see if IIS is being asked to serve pages compressed. I'm fairly convinced that our request is going to ISA, and ISA is forwarding it, but without the "compression" details - therefore IIS is not performing any compression. I've looked at the IIS logs but can't see anything obvious about the HTTP request. Is there any way I can check this sort of information on the web server itself? One thing that is confusing, but may be normal, is that the Client IP making the request is not the original PC (i.e. mine) and not the ISA firewall, but the web server itself... Thanks
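
    One quick cross-check from any machine behind the proxy is to issue the request yourself with an explicit Accept-Encoding header and inspect what comes back; if Content-Encoding is missing even then, something between the client and IIS is stripping the header or inflating the body. A minimal sketch (the URL is a placeholder; urllib is used because it does not transparently decompress, so the headers reflect what actually arrived):

        import urllib.request

        req = urllib.request.Request("http://intranet-server/page.aspx",  # placeholder
                                     headers={"Accept-Encoding": "gzip"})
        with urllib.request.urlopen(req) as resp:
            print("Content-Encoding:", resp.headers.get("Content-Encoding"))
            print("Via:", resp.headers.get("Via"))  # proxies often announce themselves here

    The client-IP observation also fits the proxy theory: a proxy terminates the original connection, so IIS logs the forwarding hop's address rather than the workstation's.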

    Read the article

  • Backup / Disaster Recovery, should I store RAR-compressed files?

    - by moraleida
    I'm in the process of recovering files from an accidentally formatted Ext4 partition using PhotoRec. It had about 300GB of data, of which I've already recovered about 30GB. So far, it seems to me that the recovery of RAR-compressed files has been much more successful than the recovery of individual uncompressed files and ZIP-compressed files - in the sense that a lot of the recovered files/zips were unreadable, while pretty much all of the RAR files were intact. Is there such a relation? Are RAR-compressed files really less prone to corruption and thus easier to recover?

    Read the article

  • How Can I Determine if HTTP Requests/Responses are compressed in IE7?

    - by DTS
    I'm trying to use Fiddler (v2.2.2.0) to see whether HTTP traffic through IE7 is being compressed. I'm not seeing Accept-Encoding or Content-Encoding request/response headers being sent/returned, and I do not need to decode the response data once it arrives, which leads me to believe that the responses are NOT coming back compressed. However, when making the same requests using Firefox 3.5.7, I could see through Firebug that FF was sending Accept-Encoding, and YSlow at least thought my data was coming back compressed. A comment on this question: http://stackoverflow.com/questions/897989/using-fiddler-to-check-iis-compression suggested that a proxy server may be to blame for stripping out headers and decompressing the content for security reasons. I am using Verizon FiOS for my broadband at home and am now wondering if Verizon is proxying my HTTP traffic. In short, how can I positively confirm or deny that responses are coming back compressed through IE? Thanks.

    Read the article
