Search Results

Search found 725 results on 29 pages for 'compress'.

Page 13/29 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Awesome new feature for HCC

    - by Steve Tunstall
    I've talked about HCC (Hybrid Columnar Compression) before. This is Oracle's built-in compression feature, free of charge in 11gR2, that allows a CRAZY amount of compression on historical data inside an Oracle database. It only works if the database is being stored on a ZFSSA, Exadata or Axiom. You can read all about it in this whitepaper, which shows the huge value of HCC when used with the ZFSSA: http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-hybrid-columnar-compression-1689701.html Now, even better, Oracle has announced a great new feature in Oracle 12c called "Automatic Data Optimization". This allows you to set up HCC to AUTOMATICALLY compress data AS IT AGES. So this is ILM (Information Lifecycle Management) built right into the Oracle database. It's free, for crying out loud. It just needs to be sitting on Oracle storage, such as the ZFSSA, Exadata or Axiom. Read about ADO here: http://www.oracle.com/technetwork/database/automatic-data-optimization-wp-12c-1896120.pdf?ssSourceSiteId=ocomen

    Read the article

  • How to get pngcrush to overwrite original files?

    - by DisgruntledGoat
    I've read through man pngcrush and it seems that there is no way to crush a PNG file and save it over the original. I want to compress several folders worth of PNGs so it would be useful to do it all with one command! Currently I am doing pngcrush -q -d . *.png then manually cut-pasting the files from the tmp directory to the original folder. So I guess using mv might be the best way to go? Any better ideas?
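
    A small wrapper script can give you the single-command, in-place behaviour. Below is a minimal sketch assuming pngcrush is on the PATH and its usual infile/outfile calling convention; the helper name and the write-to-temp-then-move approach are my own, and replacing the original only on a zero exit status keeps a failed crush from eating the source file:

        import os
        import subprocess
        import tempfile

        # Hypothetical helper: crush every .png under root, writing to a temp
        # file first and moving it over the original only if pngcrush succeeds.
        def crush_in_place(root="."):
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if not name.lower().endswith(".png"):
                        continue
                    src = os.path.join(dirpath, name)
                    fd, tmp = tempfile.mkstemp(suffix=".png", dir=dirpath)
                    os.close(fd)
                    os.remove(tmp)  # let pngcrush create the output itself
                    if subprocess.call(["pngcrush", "-q", src, tmp]) == 0 \
                            and os.path.exists(tmp):
                        os.replace(tmp, src)  # same directory, so same filesystem

        crush_in_place(".")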

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on them was conceived, implemented and contributed by engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression.

    In InnoDB, compressed pages are a fixed size. Supported sizes are 1, 2, 4, 8 and 16K, and the compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; that is, a page can be in the buffer pool in compressed-only form, or with both the compressed page and the uncompressed version present, but never in uncompressed-only form. On disk we only ever have the compressed page. When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync, i.e. changes are applied to both atomically.

    Recompression happens when changes are made to the compressed data. To minimize recompressions, InnoDB maintains a modification log within each compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page, because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge.

    A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first reorganize the page and attempt to recompress, and if that fails as well we split the page into two and recompress both pages.

    Now let's talk about the three major improvements made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happened. This was an extra safety measure to guard against the rare case where recovery is attempted using a different zlib version from the one used before the crash; because recovery is a page-level operation in InnoDB, we have to be sure that all recompression attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy-duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more redo than normal, we fill up the space much more quickly, and to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6, this behavior is controlled by a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the old behavior. If you are sure that you are never going to attempt crash recovery with a different version of zlib, you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU; we go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, we should pack the 16K uncompressed version of the page less densely, i.e. let some space in the 16K page go unused in the hope that recompression won't end in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an acceptable range. It works the other way as well: we keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed: innodb_compression_failure_threshold_pct (default 5, range 0-100, dynamic) is the percentage of compression ops that must fail before we start padding, with the value 0 having the special meaning of disabling padding altogether; innodb_compression_pad_pct_max (default 50, range 0-75, dynamic) is the maximum percentage of the uncompressed data page that can be reserved as pad.
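
    As a quick illustration of how these knobs are used in practice, here is a minimal sketch using Python's mysql-connector; the table name, connection details and the KEY_BLOCK_SIZE choice are placeholders, with the block size just picking one of the supported compressed page sizes:

        import mysql.connector  # connection details below are placeholders

        conn = mysql.connector.connect(user="root", password="secret",
                                       database="test")
        cur = conn.cursor()

        # The compressed page size is fixed at CREATE TABLE time via
        # KEY_BLOCK_SIZE (in KB).
        cur.execute("""
            CREATE TABLE t_compressed (id INT PRIMARY KEY, doc TEXT)
            ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8
        """)

        # All three 5.6 parameters discussed above are dynamic, so they can
        # be changed on the fly without a server restart.
        cur.execute("SET GLOBAL innodb_log_compressed_pages = OFF")
        cur.execute("SET GLOBAL innodb_compression_level = 9")
        cur.execute("SET GLOBAL innodb_compression_failure_threshold_pct = 10")

        conn.close()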

    Read the article

  • How to take a screenshot every n second?

    - by Seppo Erviälä
    What software can I use to take screenshots at a set interval? I'd like to take a screenshot every 2 seconds or so. Command-line and GUI are both OK. I'd prefer software that can also resize and compress each screenshot. EDIT: What I realized I really wanted to do was take a screenshot and a picture with the webcam at the same time. I ended up writing some Python:

        import threading
        import os

        def capture(i):
            i += 1
            # re-arm the timer so another capture fires in 2 seconds
            threading.Timer(2.0, capture, [i]).start()
            fill = str(i).zfill(5)
            os.system("scrot scrot-%s.jpg" % fill)
            os.system("streamer -o streamer-%s.jpeg -s 320x240 -j 100" % fill)

        capture(0)

    Read the article

  • Good practice about Javascript referencing

    - by AngeloBad
    I am wrestling with script optimization for a web application. I have an ASP.NET web app that references jQuery in the master page, and every child page can reference other libraries or JavaScript extensions. I would like to optimize the application with YUI for .NET. The question is: should I put all the library references in the master page and compress all the JavaScript code into a single file, or should I create a file for every page that contains only the code that page needs? Is there any guidance to follow? Thanks!

    Read the article

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! Actually I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites, or access them directly by their URL, their size and quality are invariably degraded. I tried with three different browsers: same problem. Using them on other sites on the Web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.
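
    One way to pin down whether the server is actually rewriting the bytes (as opposed to a browser or proxy effect) is to compare a file on disk with what gets delivered over HTTP. A minimal diagnostic sketch, with placeholder path and URL:

        import hashlib
        import urllib.request

        # Placeholder path and URL; point these at one of the degraded .gif files.
        LOCAL = "/var/www/site/images/sample.gif"
        URL = "http://example.com/images/sample.gif"

        with open(LOCAL, "rb") as f:
            on_disk = hashlib.md5(f.read()).hexdigest()
        served = hashlib.md5(urllib.request.urlopen(URL).read()).hexdigest()

        # If the digests differ, something between Apache and the wire is
        # transforming the image (a filter module, a proxy, or a hosting layer).
        print("identical" if on_disk == served
              else "bytes are being modified in transit")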

    Read the article

  • Tool to convert Textures to power of two?

    - by 3nixios
    I'm currently porting a game to a new platform. The problem is that the old platform accepted non-power-of-two textures and the new platform doesn't. To add to the headache, the new platform has much less memory, so we want to use the tools provided by the vendor to compress the textures; these, of course, only take power-of-two textures. The current workflow is to convert the non-power-of-two textures to DDS with 'texconv', then use the vendor's compression tools in a batch. So, does anyone know of a tool to convert textures to their nearest power-of-two counterparts? Thanks
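
    If no off-the-shelf tool turns up, the conversion itself is small enough to script. Here is a minimal sketch using the Pillow imaging library (an assumed dependency); it pads each image up to the next power of two rather than resizing, which keeps texel data intact but means the UVs need rescaling on the game side:

        import math
        import sys
        from PIL import Image  # Pillow; an assumed dependency

        def next_pow2(n):
            # Round n up to the nearest power of two (1 stays 1).
            return 1 << max(0, math.ceil(math.log2(n)))

        def pad_to_pow2(src, dst):
            img = Image.open(src)
            w, h = img.size
            canvas = Image.new(img.mode, (next_pow2(w), next_pow2(h)))
            canvas.paste(img, (0, 0))  # original pixels in the top-left corner
            canvas.save(dst)

        if __name__ == "__main__":
            pad_to_pow2(sys.argv[1], sys.argv[2])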

    Read the article

  • Building a web/mobile app like Instagram [on hold]

    - by John
    I would like to build an app like Instagram or Twitter. Users can upload photos, type a few words, add hashtags, and share their location. There will be a page like a newsfeed showing updates. Users can log in with OAuth. How do I store that data, especially the photos? (In one of those cloud things, like Google Cloud? I don't know how those clouds work.) And what would it cost (especially if I can compress user-uploaded photos)? I currently only know PHP, JavaScript and MySQL.

    Read the article

  • Sanity checks vs file sizes

    - by Richard Fabian
    In your game assets, do you make room for explicit sanity checks, or do you have some generally expected bounds which you assert? I've been thinking about how we compress data and thought that it's much better to have the former, and less of the latter. If your data can exceed your normal valid ranges, and when it does it's an error, then surely that implies you're not compressing the data well enough? What do you do to find out whether your data is compressed as far as it can be, and what do you use to ensure your data isn't corrupted and that it's an official release? EDIT: I'm not interested in sanity checking the file size, but in how you manage your sanity checks: do you account for the extra size by adding explicit check data, or by allowing the data enough file space (data member size) to go out of valid range, so it can be checked merely by looking at the asset in memory after loading?
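
    For what it's worth, here is a minimal sketch of the "explicit extra data" flavor of sanity check: the asset carries a small header with the payload length and a CRC32, so corruption is caught by bytes that exist purely for checking, rather than by asserting that in-range-looking values are valid. The format here is invented for illustration:

        import struct
        import zlib

        HEADER = struct.Struct("<II")  # payload length, CRC32 of payload

        def write_asset(path, payload):
            with open(path, "wb") as f:
                f.write(HEADER.pack(len(payload),
                                    zlib.crc32(payload) & 0xFFFFFFFF))
                f.write(payload)

        def read_asset(path):
            with open(path, "rb") as f:
                size, crc = HEADER.unpack(f.read(HEADER.size))
                payload = f.read(size)
            # Explicit check: fails loudly on truncation or bit rot.
            if len(payload) != size or zlib.crc32(payload) & 0xFFFFFFFF != crc:
                raise ValueError("corrupt asset: " + path)
            return payload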

    Read the article

  • md5deep error after compression and extraction

    - by Sai
    I am using md5deep to generate a checksum for the contents of an entire directory. After generation, if I take the checksum file and run a check, there are no errors. However, if I compress the directory and extract it, the check reports an error about a missing file, just called .md5. There is no file called .md5 inside the directory, so the check seems to be referencing a file that doesn't exist. This happens whether I use md5sum or md5deep. Any thoughts on why this is happening?
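
    One way to see which entry is the phantom is to re-run the manifest check by hand. Here is a minimal sketch assuming the usual md5deep/md5sum manifest format of "HASH  PATH" per line; it also skips the manifest itself, which is the usual culprit when the listing was generated with the manifest file already sitting inside the directory being hashed:

        import hashlib
        import os

        def verify(manifest, root="."):
            with open(manifest) as f:
                for line in f:
                    digest, path = line.rstrip("\n").split("  ", 1)
                    if os.path.abspath(path) == os.path.abspath(manifest):
                        continue  # don't check the manifest against itself
                    full = os.path.join(root, path)
                    if not os.path.exists(full):
                        print("MISSING:", path)
                        continue
                    with open(full, "rb") as g:
                        actual = hashlib.md5(g.read()).hexdigest()
                    if actual != digest:
                        print("MISMATCH:", path)

        verify("checksums.md5")  # placeholder manifest name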

    Read the article

  • LINQ Lycanthropy: Transformations into LINQ

    LINQ is one of the few technologies that you can start to use without a lot of preliminary learning. Also, it lends itself to learning by trying out examples. With Michael Sorens' help, you can watch as your conventional C# code changes to ravenous LINQ before your very eyes.

    Read the article

  • Which of Your Stored Procedures are Using the Most Resources?

    Dynamic Management Views and Functions aren't always easy to understand. However, they are the easiest way of finding out which of your stored procedures are using up the most resources. Greg takes the time to explain how and why these DMVs and DMFs get their information. Suddenly, it all gets clearer.

    Read the article

  • Top 10 Transact-SQL Statements a SQL Server DBA Should Know

    Microsoft SQL Server is a feature-rich database management system with an enormous number of T-SQL commands. With each feature supporting its own list of commands, it can be difficult to remember them all. MAK shares his top 10 T-SQL statements that a DBA should know.

    Read the article

  • How to distribute applications?

    - by Dr Deo
    I am new to Ubuntu development. As a learning experience, I have written a custom chat application using Qt 4 and I want to deploy it in some sort of setup file. What's the easiest way of deploying an application that handles: setting desktop icons; automatically requesting administrator privileges to execute; inserting an entry into the startup menu; automatically compressing my application to reduce download size; and automatic startup for my application without user intervention? I am familiar with using NSIS scripts on Windows, but I don't know where to begin on Ubuntu. I would prefer a solution similar to NSIS scripts.

    Read the article

  • What web-oriented language would work best with binary data?

    - by Qqwy
    I want to create a service where people can upload files. However, since file storage costs money, I want to compress the files so they take less space. I would like to write my own compression algorithm; however, PHP doesn't have good ways to handle binary data, which many compression algorithms need. So I wondered: what would be a better language to build such a website in? I have knowledge of PHP (and JavaScript, HTML and CSS) but no experience with other things like Ruby, Perl, Python, and other web development languages.
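
    For comparison, languages with a first-class bytes type make this kind of work short. A minimal round-trip sketch in Python (file names are placeholders), using the built-in zlib module rather than a hand-rolled algorithm:

        import zlib

        # Read an uploaded file as raw bytes, compress at the highest level,
        # and verify the round trip before storing.
        with open("upload.bin", "rb") as f:
            data = f.read()

        compressed = zlib.compress(data, 9)
        assert zlib.decompress(compressed) == data

        with open("upload.bin.z", "wb") as f:
            f.write(compressed)

        print("%d -> %d bytes" % (len(data), len(compressed)))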

    Read the article

  • Optimal Compression for Speech

    - by ashes999
    I'm designing a game that depends heavily on audio; I will have some 300+ speech files (most of them just a word or two long). This can very quickly escalate the size of my final game. What's the optimal way to encode/compress speech files to keep the size minimal without audible artifacts? Please address both per-file compression/encoding and zipping/compressing the set of all speech files together, because I'm not sure which factor (or a combination of both) will give me the best results. Edit: I need this to run in Silverlight and on Android, so I'm presumably stuck with MP3 as my only option (other than uncompressed WAV files).
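
    On the per-file side, speech tolerates far lower bitrates than music, and mono halves the size again. Here is a minimal batch-encoding sketch that shells out to the LAME MP3 encoder (assumed to be installed); the directory layout and the 32 kbps mono settings are illustrative starting points, not tuned values:

        import glob
        import subprocess

        # Encode every WAV in speech/ to a low-bitrate mono MP3.
        for wav in glob.glob("speech/*.wav"):
            mp3 = wav[:-4] + ".mp3"
            # -m m forces mono, -b 32 sets a 32 kbps bitrate; adjust by ear.
            subprocess.check_call(
                ["lame", "--quiet", "-m", "m", "-b", "32", wav, mp3])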

    Read the article

  • Column-Level Encryption in SQL Server

    Beginning with SQL Server 2005, column-level encryption and decryption capabilities were made available within the database, providing a solution for situations where one-off types of data need to be secured beyond your existing authorization, authentication or firewall settings. This article provides an overview and example of securing a column using native SQL Server cryptography functions.

    Read the article

  • Good Compression for Slow-mo Video

    - by marienbad
    What's the best way to deliver super slow-motion video to the browser? This seems to me to be a special case, because with super slow-mo video (such as 10,000 frames per second) the visual difference from frame to frame is minimal. As such, it's easy to compress highly. Please suggest codecs, as well as encoding software, backend software, software configuration tips, and services like YouTube. My goal is to get about 100 frames of QVGA video to the browser in 500KB. By the way, remember that Radiohead In Rainbows site?
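
    As a starting point, here is a minimal sketch that shells out to FFmpeg (assumed to be installed) with x264; the CRF value and QVGA scale are illustrative, and for near-static slow-mo footage you can often push CRF higher than usual before artifacts appear:

        import subprocess

        # Transcode a slow-mo clip to QVGA H.264; file names are placeholders.
        subprocess.check_call([
            "ffmpeg", "-i", "slowmo_source.mov",
            "-vf", "scale=320:240",           # QVGA target
            "-c:v", "libx264", "-crf", "30",  # constant quality; raise CRF to shrink
            "-an",                            # drop audio if there is none worth keeping
            "out.mp4",
        ])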

    Read the article

  • Read an object from compressed file generated from ActionScript 3

    - by Last Chance
    I have made a simple game Map Editor and I want to save an array that contains map tile info to a file, as below:

        var arr:Array = [.....2d tile info in it...];
        var ba:ByteArray = new ByteArray();
        ba.writeObject(arr);
        ba.compress();
        var file:File = new File();
        file.save(ba);

    I had successfully saved a compressed object to a file. Now the problem is my server side needs to read this file, decompress the array out of it, then convert it to a Python list. Is that possible?
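
    It should be possible: ByteArray.compress() uses zlib by default, and writeObject() serializes in AMF3 format by default, so the server side needs a zlib inflate plus an AMF decoder. Here is a minimal sketch using Python's built-in zlib and the third-party PyAMF package (an assumption, as is the file name):

        import zlib

        from pyamf import amf3  # PyAMF, a third-party package

        with open("map.dat", "rb") as f:     # the file saved by the editor
            raw = zlib.decompress(f.read())  # ByteArray.compress() is zlib deflate

        # ByteArray.writeObject() uses AMF3 encoding by default.
        decoder = amf3.Decoder(raw)
        tiles = decoder.readElement()        # decoded back into a Python list
        print(tiles)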

    Read the article

  • SQL Server Migration Assistant 2008 (SSMA)

    One of my client's requirements is to migrate and consolidate his company departments' databases to SQL Server 2008. As I know the environment, they are using MySQL, MS Access and SQL Server with different applications. Now the company has decided to have a single dedicated SQL Server 2008 database server to host all the applications, so there are a few things to do to upgrade and migrate from MySQL and MS Access to SQL Server 2008. For the migration task, I found the SQL Server Migration Assistant 2008 (SSMA 2008) very useful; it reduces the effort and risk of migration. So in this tip, I will give an overview of SSMA 2008.

    Read the article

  • Generating SQL Server Test Data with Visual Studio 2010

    As a database developer or tester, sometimes you need production-like data in your environment for development or testing, but you cannot have the production data because of security and privacy issues. So how can you generate test data, or replicate data similar to production, for your development or test environment?

    Read the article

  • Automatic Statistics Update Slows Down SQL Server 2005

    I have a database with several tables that have very heavy write operations. These tables are very large and some are over a hundred gigabytes. I noticed the performance of this database is getting slower, and after some investigation we suspect that the Auto Update Statistics function is causing the degradation.

    Read the article

  • Using INSERT / OUTPUT in a SQL Server Transaction

    Frequently I find myself in situations where I need to insert records into a table in a set-based operation wrapped inside a transaction where, secondarily and within the same transaction, I spawn off subsequent inserts into related tables, passing in key values that were the outcome of the initial INSERT command. Thanks to a Transact-SQL enhancement in SQL Server, this just became much easier and can be done in a single statement... WITHOUT A TRIGGER!

    Read the article

  • Is backing up a MySQL database in GIT a good idea?

    - by wobbily_col
    I am trying to improve the backup situation for my application. I have a Django application and a MySQL database. I read an article suggesting backing up the database in Git. On the one hand I like it, as it will keep a copy of the data and the code in sync. But Git is designed for code, not data. As such it will do a lot of extra work diffing the MySQL dump on every commit, which is not really necessary. If I compress the file before storing it, will git diff the files? (The dump file is currently 100 MB uncompressed, 5.7 MB when bzipped.) Edit: the code and database schema definitions are already in Git; it is really the data I am concerned about backing up now.
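
    On the compression question: git deltas plain text well but gets almost no traction on compressed blobs, because one changed row reshuffles the whole bzipped stream, so every commit stores an essentially new 5.7 MB object. A minimal nightly-snapshot sketch (database and file names are placeholders) that keeps the dump uncompressed and diff-friendly, one INSERT per row:

        import subprocess

        DB = "myapp_db"          # placeholder database name
        DUMP = "backup.sql"      # kept uncompressed so git can delta it

        # --skip-extended-insert writes one INSERT per row, so a single
        # changed row touches a single line in the dump and diffs stay tiny.
        with open(DUMP, "w") as out:
            subprocess.check_call(
                ["mysqldump", "--skip-extended-insert", DB], stdout=out)

        subprocess.check_call(["git", "add", DUMP])
        subprocess.check_call(["git", "commit", "-m", "nightly database snapshot"])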

    Read the article
