Search Results

Search found 2696 results on 108 pages for 'compression formats'.


  • Increase the compression performance of VPN

    - by Martin
    I am currently switching from a system with HPN-SSH tunnels and enabled compression to something VPN-based. I have tried tinc and n2n so far; hamachi requires a library I do not have. In my primitive benchmarks I am not satisfied with the achievable bandwidth compared to the SSH tunnels. In tinc the low LZO setting performed best, but compression is only available in UDP mode. Ideally I would like a TCP-based VPN with multi-threaded compression. Can you suggest some ways to increase the performance? Would it be possible to somehow put a compression filter in front of the tun interface? Or are there any VPN implementations that might be better suited to my needs (fast compression, TCP-based, switch mode, does not have to be super-secure)? I would consider tunnelling Ethernet over SSH, but according to some articles it is not advisable.
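
    Not part of the original question, but to make the "compression filter in front of the tunnel" idea concrete, here is a toy one-directional sketch in Python: a local TCP relay that zlib-compresses everything it forwards. Hosts, ports, and the fast compression level are illustrative assumptions.

      import socket
      import zlib

      LISTEN = ("127.0.0.1", 9000)   # local end applications connect to (hypothetical)
      FORWARD = ("10.0.0.2", 9001)   # remote tunnel endpoint (hypothetical)

      def relay():
          srv = socket.create_server(LISTEN)
          conn, _ = srv.accept()
          out = socket.create_connection(FORWARD)
          comp = zlib.compressobj(level=1)  # fast level, in the spirit of tinc's low LZO setting
          while True:
              chunk = conn.recv(65536)
              if not chunk:
                  break
              # Z_SYNC_FLUSH keeps the stream decodable at each chunk boundary
              out.sendall(comp.compress(chunk) + comp.flush(zlib.Z_SYNC_FLUSH))

      if __name__ == "__main__":
          relay()

    A real filter would need the mirror-image decompressing relay on the far side plus a second path for return traffic; per-direction worker threads would be the natural place to add the multi-threading the question asks about.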

    Read the article

  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in subsequent lossy and lossless compression steps. [source] It is also mentioned that if the image dimensions are a multiple of one MCU (Minimum Coded Unit, 'usually 16 pixels in both directions'), lossless alterations to the JPEG can be performed. [source] I am working with product images and would like to know both whether, and how much, benefit can be derived from using multiples of 16 in my final image size (say, an image sized 480px by 360px) vs. a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image. To try to get closer to a specific answer, where I know there must be largely generalities: given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example]: Can I expect any quality loss from an image that is 484x362? What amount of file-size addition can I expect (for this example, the additional space would be white pixels)? Are there any other disadvantages to growing larger than the 8px grid? I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking the 8px grid. The key issue here is a debate I've had about whether images with 8-pixel-divisible dimensions are higher quality than images that are not divisible by 8 pixels.
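
    For a feel for the numbers (illustrative, not from the original question): JPEG encoders pad each dimension up to the block grid internally, so the padding cost of a given size can be computed directly. A quick Python sketch, assuming an 8x8 block and a 16x16 MCU:

      import math

      def coded_size(w, h, block):
          """Round dimensions up to the grid the encoder actually codes."""
          return math.ceil(w / block) * block, math.ceil(h / block) * block

      for w, h in [(480, 360), (484, 362)]:
          for block in (8, 16):
              pw, ph = coded_size(w, h, block)
              waste = pw * ph - w * h
              print(f"{w}x{h} on the {block}px grid -> coded as {pw}x{ph} "
                    f"({waste} padding pixels)")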

    Read the article

  • Creating transparent PNG with exact RGBA values

    - by rrowland
    I'm color-coding a transparent image to be read programmatically. However, the image seems to be getting compressed, and my code is reading color values other than the ones I mean to pass it.

    Concept: This is the output I get when exporting as PNG-24. I programmatically check each pixel for one of the six colors I use in creating the image: 0x00000F, 0x0000F0, 0x000F00, 0x00F000, 0x0F0000, 0xF00000. Each color represents a different texture to apply. Top right (0x00000F) will pull texture from the tile to its top right and blend it at a ratio equal to the opacity of the pixel. The end goal is to create a hex-tiled grid with differing textures that blend smoothly.

    What's happening: It seems that when converting to PNG, Photoshop changes the RGBA values, either to smooth the image or just to reduce the compressed size. Parts that should be 250 red range anywhere from 150 to 255.

    Question: Whether using PNG or another web-compatible format, I need to be able to save these pixel values (essentially instructions) losslessly while still maintaining transparency. Is this possible in any format, or will I need to re-think my approach?
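
    As an illustrative check (not from the original question), assuming Pillow and a hypothetical tiles.png, this verifies whether any non-transparent pixel strays from the six expected colors:

      from PIL import Image

      EXPECTED = {0x00000F, 0x0000F0, 0x000F00, 0x00F000, 0x0F0000, 0xF00000}

      img = Image.open("tiles.png").convert("RGBA")  # hypothetical filename
      unexpected = set()
      for r, g, b, a in img.getdata():
          rgb = (r << 16) | (g << 8) | b
          if a > 0 and rgb not in EXPECTED:  # ignore fully transparent pixels
              unexpected.add(rgb)
      print("stray colors:", sorted(hex(c) for c in unexpected))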

    Read the article

  • File formats to download ringtones

    - by Osvaldo
    What file formats and other specifications must be used to offer ringtones (as downloads) on a website? I'm interested in giving away just one ringtone. The target audience uses smartphones with Android, iOS and Windows Phone launched in the last 2-3 years. Is it necessary to include instructions, or is it something relatively easy to do? Or can it not be done for some reason? Does the ringtone have to be downloaded to a desktop first, or can it be downloaded on the mobile phone while accessing the web page with the download?

    Read the article

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So, if a file exists in the cache, no file is sent. But if the file on the server has changed since it was cached in the browser, the file is sent and updated anyhow. Then, if you have compression like gzipping enabled on the server, the files that are to be provided to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach seems to me that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span, along with a compressed version of each, so that compression would not have to be done each time a file is requested. And also, how are files actually requested? Does the browser ask for files one at a time, each time it encounters one in the HTML code and the specific file is not stored in the local cache, or does it add up all the files that are needed and ask for the whole bunch at the same time? But that's only guessing from a programming point of view, and I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated, too.
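
    To make the cache-validation round trip concrete, here is an illustrative sketch (not from the original question; the URL and ETag value are hypothetical) of roughly what a browser does per cached file, using Python's standard library:

      import urllib.request
      import urllib.error

      req = urllib.request.Request(
          "http://example.com/style.css",      # hypothetical resource
          headers={
              "Accept-Encoding": "gzip",       # advertise compression support
              "If-None-Match": '"abc123"',     # ETag remembered from an earlier response
          },
      )
      try:
          with urllib.request.urlopen(req) as resp:
              print(resp.status, resp.headers.get("Content-Encoding"))
      except urllib.error.HTTPError as err:
          if err.code == 304:
              print("304 Not Modified - the cached copy is reused, nothing re-sent")
          else:
              raise

    The 304 path is what lets the server skip both the transfer and the recompression; whether the gzipped bytes themselves are cached server-side is indeed configuration-dependent (in Apache, mod_deflate compresses per response unless paired with a cache module).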

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24-bpp/32-bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions:

    1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2. How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)
    3. How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, allowing memory reuse and requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
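
    For illustration only (sketched in Python for brevity, though the question is about C++ engines), the first method above, a packed 32-bpp bitmap in one contiguous buffer addressed via row stride, might look like:

      class Bitmap:
          BPP = 4  # bytes per pixel, 32-bpp RGBA

          def __init__(self, width, height):
              self.width, self.height = width, height
              self.stride = width * self.BPP          # no row padding in this sketch
              self.data = bytearray(self.stride * height)

          def offset(self, x, y):
              # one multiply and one add per pixel access, cache-friendly row order
              return y * self.stride + x * self.BPP

          def set_pixel(self, x, y, rgba):
              o = self.offset(x, y)
              self.data[o:o + 4] = bytes(rgba)

          def get_pixel(self, x, y):
              o = self.offset(x, y)
              return tuple(self.data[o:o + 4])

    At 32 bpp the stride is automatically a multiple of 4 bytes, which is one reason the fixed-array layout tends to be the fast default; 24 bpp saves memory but often costs padding or unaligned access.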

    Read the article

  • Differentiating between user script input formats

    - by KChaloux
    I have a .NET project at work that provides a couple of (Iron)Python scripts to the customers, to allow them to customize the output of the program. The application generates code for certain machines and supports a couple of different formats. Until recently, we only provided a script for one format. We're expanding upon that to include support for the others. If the user is using a script, they select their input script before generating the output code. A script designed for Format1 output is going to cause errors if they're trying to generate Format2 output. I need to deal with this. One option would just be to let the customers use common sense: if they load the wrong script it will simply fail, or worse, produce inaccurate data. I'm inclined to provide a little more protection than that. At the moment I'm considering putting a shebang-style comment line at the top of the script, a la: # OUTPUT - Format1. If the user tries to run a Format2 process with a Format1 script, it will warn them. Alternatively, I could create different file extensions for the input scripts, varying by type. The format-comment approach helps prevent the script from actually loading improperly, at the cost of failing to warn the user until they've already selected it via a dialog box. Using different file extensions would allow me to cut down on visual clutter when providing a file dialog, but doesn't actually stop them from loading the wrong script. So I'm really not sure whether the right approach is to just leave it alone or to provide some safeguards.
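
    A minimal sketch of the format-comment check (function names are hypothetical; the real project is .NET/IronPython, but the logic is the same in any host language):

      MARKER = "# OUTPUT - "

      def script_format(path):
          """Return the declared output format, or None if no marker is present."""
          with open(path, encoding="utf-8") as f:
              first = f.readline().strip()
          return first[len(MARKER):] if first.startswith(MARKER) else None

      def check_script(path, expected):
          fmt = script_format(path)
          if fmt is None:
              raise ValueError(f"{path}: no format marker found")
          if fmt != expected:
              raise ValueError(f"{path}: script targets {fmt}, not {expected}")

    Run at selection time, this turns the "wrong script" case into an immediate warning rather than a failed or silently wrong generation.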

    Read the article

  • Is there a compression method for compressing a group of very similar files, without archiving them?

    - by awiebe
    I want to compress a large number of files that have near-identical headers, and also some shared data; however, I do not wish to archive them, nor do I wish to zip them individually (because the compression ratio would be much better if substitutions of similar blocks could be done using a single table). Does a compression method already exist to do this, or should I implement it myself? Note: Don't say "disk space is cheap", because I may want to use this on an embedded system.
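
    One existing mechanism close to this is DEFLATE's preset dictionary: each file is compressed independently (no archive), but all of them share one table of common blocks. A sketch using Python's zlib, where SHARED stands in for the near-identical header (illustrative value; in practice it would be sampled from the real files):

      import zlib

      SHARED = b"HEADERv1;field=...;" * 50  # illustrative stand-in for the common header

      def pack(raw: bytes) -> bytes:
          c = zlib.compressobj(level=9, zdict=SHARED)
          return c.compress(raw) + c.flush()

      def unpack(blob: bytes) -> bytes:
          d = zlib.decompressobj(zdict=SHARED)
          return d.decompress(blob) + d.flush()

    Because the dictionary is supplied out of band, the shared header costs almost nothing per file. zstd's trained-dictionary mode is a heavier-duty version of the same idea.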

    Read the article

  • btrfs and missing free space

    - by easteregg
    I converted my ext4 partition to btrfs and deleted the saved subvolume after doing so. Then I enabled compression (lzo) of the filesystem in the fstab file, and everything was correct so far. Then I forced compression of all files using the defragmentation command with the parameter -c, so that the new compression is applied to all files. While doing so, I noticed that my SSD got completely filled up: before, I had 6 gigs of free space; now I have nothing left.

      easteregg@x201s:~$ btrfs fi df /
      Data: total=50.00GB, used=49.17GB
      System: total=32.00MB, used=4.00KB
      Metadata: total=24.50GB, used=9.86GB

    and

      easteregg@x201s:~$ df -ha
      Filesystem  Size  Used  Avail  Use%  Mounted on
      /dev/sda1   75G   60G   852M   99%   /

    So now, how can I regain my free space? I expected to gain more space because of the lzo compression. And the fs is correctly mounted:

      easteregg@x201s:~$ mount
      /dev/sda1 on / type btrfs (rw,noatime,ssd,compress=lzo)

    Any ideas how to fix this issue?

    Read the article

  • Question about Byte-Pairing for data compression [closed]

    - by user1669533
    A question about byte-pairing for data compression: if byte-pairing converts two byte values to a single byte value, splitting the file in half, then taking a gig file and recursing on it 16 times shrinks it to 62,500,000. My question is: is byte-pairing really efficient? Is the creation of a 5,000,000-iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please. Best regards.
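
    A quick arithmetic check on the claim (illustrative): if each pairing pass could really halve the file, k passes would divide the size by 2^k:

      size = 1_000_000_000  # "a gig file"

      for passes in (4, 16):
          print(f"{passes} passes -> at best {size // 2**passes:,} bytes")

      # 4 passes  -> 62,500,000 bytes (dividing by 16, which matches the question's figure)
      # 16 passes -> 15,258 bytes

    So the 62,500,000 figure corresponds to dividing by 16, i.e. four halvings, not sixteen.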

    Read the article

  • How can I maximally compress .gz files in Nautilus?

    - by Takkat
    When selecting Compress... from the right-click context menu in Nautilus I am able to quickly compress files to .gz format. However, by default Nautilus does not use maximum compression. Can I make Nautilus use maximum compression, like gzip -9? Using gconftool or gconf-editor to set the compression_level for File Roller to maximum seems right, but unfortunately it does not have the desired effect and does not produce maximally compressed files. As this is the expected way to set compression levels, a bug report has been filed upstream. Any ideas for a workaround are welcome.

    Read the article

  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I'm trying to enable HTTP 1.0 requests to be gzipped. MSDN has all the info about it here. Is it possible to have this config info in my own website's web.config file? Or do I need to set it at an application level? Currently, I have this code in my web.config:

      <system.webServer>
        <urlCompression doDynamicCompression="true"
                        dynamicCompressionBeforeCache="true" />
        <httpCompression cacheControlHeader="max-age=86400"
                         noCompressionForHttp10="False"
                         noCompressionForProxies="False"
                         sendCacheHeaders="true" />
        ... other stuff snipped ...
      </system.webServer>

    It's not working :( HTTP 1.1 requests are getting compressed, just not 1.0. That MSDN page above says that it can be used in: Machine.config, ApplicationHost.config, root application Web.config, application Web.config, and directory Web.config. So, can we set these settings on a per-website basis, programmatically in a web.config file? (This is an application Web.config file...) What have I done wrong? cheers :)

    EDIT: I was asked how I know HTTP 1.0 is not getting compressed. I'm using the Failed Request Tracing Rules, which report back:

      DYNAMIC_COMPRESSION_START
      DYNAMIC_COMPRESSION_NOT_SUCESS
        Reason: 3
        Reason: NO_COMPRESSION_10
      DYNAMIC_COMPRESSION_END

    Read the article

  • Does Compressed Sensing bring anything new to data Compression?

    - by anon
    Compressed sensing is great for situations where capturing data is expensive (in either energy or time), so that fewer samples can be taken. However, in situations like image compression, given that the data is already on the computer, does compressed sensing offer anything? For example, would it offer better data compression? Would it result in better image search? (Note: If you don't know what the field of compressed sensing is, please do not respond.)

    Read the article

  • .NET - A way to add my own clipboard format to existing formats

    - by A9S6
    I have an Excel add-in that displays some structures on the worksheet. Users can copy the structures and paste them in another worksheet or another application, which is handled by the clipboard formats. When a user copies the structure, I convert the structure into a specific format and put it on the clipboard using DataObject::SetData(). Please note that when a copy is initiated in Excel, it puts a number of formats on the clipboard (see image). The problem is that there is a third-party application that depends on the data on the clipboard (copy from Excel and paste into this 3rd-party app), but the funny thing is that I am not sure which format it depends on. I need to preserve the existing formats that Excel has put up there and also add my own format. Currently, when I use the Clipboard class in .NET (taking the DataObject and calling SetData inside it), all the other formats are replaced by new ones. I then tried to create a new DataObject, copy the existing format data to this data object, and then set this data object on the clipboard. This works fine, but it takes time to copy the data.

      // Copying existing data in clipboard to our new DataObject
      IDataObject existingDataObject = Clipboard.GetDataObject();
      DataObject dataObject = new DataObject();
      string[] existingFormats = existingDataObject.GetFormats();
      foreach (string existingFormat in existingFormats)
          dataObject.SetData(existingFormat, existingDataObject.GetData(existingFormat));

    I am looking for a solution that just accesses the existing DataObject and quietly adds my own data to it without affecting the other formats. Excel Clipboard Formats - (Ignore the Native Format)
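
    For illustration only: at the Win32 level the clipboard is cleared only by an explicit EmptyClipboard() call, so opening it and setting data without that call appends a format alongside the existing ones. A sketch using pywin32 (not the .NET API from the question; the format name and payload are hypothetical, and ownership semantics should be verified against the Win32 documentation):

      import win32clipboard

      MY_FORMAT = win32clipboard.RegisterClipboardFormat("MyAddinStructure")  # hypothetical name

      win32clipboard.OpenClipboard()
      try:
          # Note: no EmptyClipboard() here - Excel's existing formats stay on the clipboard.
          win32clipboard.SetClipboardData(MY_FORMAT, b"serialized structure bytes")
      finally:
          win32clipboard.CloseClipboard()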

    Read the article

  • Sharp HealthCare Reduces Storage Requirements by 50% with Oracle Advanced Compression

    - by [email protected]
    Sharp HealthCare is an award-winning integrated regional health care delivery system based in San Diego, California, with 2,600 physicians and more than 14,000 employees. Sharp HealthCare's data warehouse forms a vital part of the information system's infrastructure and is used to separate business intelligence reporting from time-critical health care transactional systems. Faced with tremendous data growth, Sharp HealthCare decided to replace their existing Microsoft products with a solution based on Oracle Database 11g and to implement Oracle Advanced Compression. Join us to hear directly from Kim Nguyen, the primary DBA for the Data Warehouse Application Team, how the new environment significantly reduced Sharp HealthCare's storage requirements and improved query performance.

    Read the article

  • Oracle saves with Oracle Database 11g and Advanced Compression

    - by jenny.gelhausen
    Oracle Corporation runs a centralized eBusiness Suite system on Oracle Database 11g for all its employees around the globe. This clustered Global Single Instance (GSI) has scaled seamlessly with many acquisitions over the years, doubling the number of employees since 2001 and supporting around 100,000 employees today, 24 hours a day, 7 days a week, around the world. In this podcast, you'll hear from Raji Mani, IT Director for Oracle's PDIT Group, on how Oracle Database 11g and Advanced Compression are helping to save big on storage costs.

    Read the article

  • How do I get the compression on a specific dynamic body?

    - by Mike JM
    Sorry, I could not find any tag that would suit my question. Let me first show you the image and then describe what I want to do: I'm using Box2D. As you can see, there are three dynamic bodies connected to each other (think of it as a table seen from the front). LEG1 and LEG2 are connected to the static body (the ground body). Another dynamic body is falling onto the table. I need to get the compression in LEG1 and LEG2 separately. Joints have a GetReactionForce() function which returns a b2Vec2, which in turn has Length() and LengthSquared() functions. This will give the total sum of the forces in any given joint. But what I need is the forces in the individual bodies that are connected with joints. Once you connect several bodies with a single joint, it again shows the sum of the forces, which is not useful. Here's the case I'm talking about:

    Read the article

  • Hybrid Columnar Compression

    - by user12620172
    You have heard me talk in the past about the HCC feature for Oracle databases. Hybrid Columnar Compression is a fantastic, built-in, free feature of Oracle 11gR2. One used to need an Exadata to make use of it. However, last October, Oracle opened it up and now allows it to work on ANY Oracle DB server running 11gR2, as long as the storage behind it is a ZFSSA for dNFS, or an Axiom for FC. If you're not sure why this is so cool or what HCC can do for your Oracle database, please check out this presentation. In it, Art will explain HCC, show you what it does, and give you a great idea why it's such a game-changer for those holding lots of historical DB data. Did I mention it's free? Click here: http://hcc.zanghosting.com/hcc-demo-swf.html

    Read the article

  • How to enable gzip HTTP compression on Windows Azure dynamic content

    - by Steven
    Hi all, I've been trying unsuccessfully to enable gzip HTTP compression on my Windows Azure hosted WCF RESTful service, which returns JSON only from GET and POST requests. I have tried so many things that I would have a hard time listing all of them, and I now realise I have been working with conflicting information (regarding old versions of Azure etc.), so I think it best to start with a clean slate! I am working with Visual Studio 2008, using the February 2010 tools for Visual Studio. So, according to the following link, HTTP compression has now been enabled: http://msdn.microsoft.com/en-us/library/ff436045.aspx ... and I've used the advice at the following page (the URL compression advice only), but I get no compression: http://blog.smarx.com/posts/iis-compression-in-windows-azure

      <urlCompression doStaticCompression="true"
                      doDynamicCompression="false"
                      dynamicCompressionBeforeCache="true" />

    It doesn't help that I don't know what the difference is between urlCompression and httpCompression. I've tried to find out, but to no avail! Could the fact that the tools for Visual Studio were released before the version of Azure which supports compression be a problem? I read somewhere that with the latest tools you can choose which version of the Azure OS you want to use when you publish, but I don't know if that's true, and if it is, I can't find where to choose. Could I be using a version from before HTTP compression was enabled? I've also tried the Blowery HTTP compression module, but no results. Does anyone have any up-to-date advice on how to achieve this? i.e. advice that relates to the current version of the Azure OS. Cheers! Steven

    Read the article

  • Apache Tika reaches version 1.0: 1200 formats supported by the Java toolkit for data detection, extraction, and analysis

    Apache Tika is available in version 1.0. The data detection, extraction, and analysis toolkit now supports 1200 file formats. After five years of development, the open-source Tika project has reached maturity and proudly sports a round version number: 1.0. It is a lightweight, easily embeddable Java toolkit for detecting, extracting, and analyzing metadata and structured text data from a very wide variety of file formats (1200 at the time of writing). Among these formats are HTML, XML, Microsoft Office, OpenOffice/OpenDocument, PDF, images, ebooks/EPUB, Rich Text, and various com... formats

    Read the article

  • Jumpshare Makes It Dead Simple To Drag, Drop, and Share 150+ File Formats

    - by Jason Fitzpatrick
    If you’re looking for a super simple way to share files with friends and coworkers, Jumpshare offers drag-and-drop file transfer with a powerful built-in file viewer. You don’t need to register, install any software, or do anything but drag the file, drop it onto the Jumpshare interface, and share the link with your friend. Share the link and your friend can watch the real-time progress of the file upload, as well as download or view the completed files within the Jumpshare file viewer. Files are hosted for two weeks before deletion. Hit up the link below to take it for a test drive. Jumpshare

    Read the article

  • Should SQL Server tools target wide screen formats instead of portrait formats?

    - by Greg Low
    There was a short discussion on the SQL Down Under mailing list this morning about screen resolutions for working with the SQL Server tools. In particular, the issue was how unusable the tools are at the 1366x768 resolution that now seems to be the most common on notebooks. While finding a notebook with an appropriate resolution is obviously the answer at this time, I started thinking that the product itself needs to address this. SQL Server tools currently target a portrait 4:3 shape for minimum...(read more)

    Read the article
