Search Results

Search found 35907 results on 1437 pages for 'text compression'.


  • TextWrangler (OS X) -- Simple Text Macro Help Needed

    - by bobber205
    I often, when parsing log/error files, need to replace the HTML entities &lt; and &gt; with < and > in order to be able to efficiently understand what's going on in the files. I know TextWrangler has a macro ability but I can't figure out an efficient way to do this. Since I have to do it so often I'd love to just have a simple keybinding or menu item to do this simple find/replace-all for me. Anyone know how to do this? ^_^
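
    For reference, the transformation being asked about looks roughly like this outside TextWrangler (a minimal Python 3 sketch using only the standard library; the file names are placeholders, and this is not a TextWrangler macro):

        import html

        # Read a log file whose angle brackets were HTML-escaped and write an
        # unescaped copy (&lt; -> <, &gt; -> >, and so on)
        with open("error.log", encoding="utf-8") as src:
            unescaped = html.unescape(src.read())
        with open("error.unescaped.log", "w", encoding="utf-8") as dst:
            dst.write(unescaped)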

    Read the article

  • Rich Text Editor (YUI Simple Text Editor used) not sending data to next page

    - by Aman Chhabra
    I am using a simple text editor from YUI, but when I click submit the code in the textarea/editor is not sent to the next page. I want to be able to receive it on the subsequent page and then store it in a DB(MySql). I have wasted lot of time already. Please help. HTML FILE: <html> <head> <script type="text/javascript" src="http://yui.yahooapis.com/combo?2.9.0/build/yahoo-dom-event/yahoo-dom-event.js&2.9.0/build/container/container_core-min.js&2.9.0/build/element/element-min.js&2.9.0/build/editor/simpleeditor-min.js"></script> <link rel="stylesheet" type="text/css" href="http://yui.yahooapis.com/2.9.0/build/editor/assets/skins/sam/simpleeditor.css" /> <link rel="stylesheet" type="text/css" href="http://yui.yahooapis.com/2.8.2r1/build/assets/skins/sam/skin.css"> <!-- Utility Dependencies --> <script src="http://yui.yahooapis.com/2.8.2r1/build/yahoo-dom-event/yahoo-dom-event.js"></script> <script src="http://yui.yahooapis.com/2.8.2r1/build/element/element-min.js"></script> <!-- Needed for Menus, Buttons and Overlays used in the Toolbar --> <script src="http://yui.yahooapis.com/2.8.2r1/build/container/container_core-min.js"></script> <script src="http://yui.yahooapis.com/2.8.2r1/build/menu/menu-min.js"></script> <script src="http://yui.yahooapis.com/2.8.2r1/build/button/button-min.js"></script> <!-- Source file for Rich Text Editor--> <script src="http://yui.yahooapis.com/2.8.2r1/build/editor/editor-min.js"></script> <script> YAHOO.util.Event.on('submit', 'click', function() { myEditor.saveHTML(); var html = myEditor.get('element').value; }); (function() { var Dom = YAHOO.util.Dom, Event = YAHOO.util.Event; var myConfig = { height: '200px', width: '900px', dompath: true, }; myEditor = new YAHOO.widget.SimpleEditor('msgpost', myConfig); myEditor.render(); })(); </script> </head> <body class="yui-skin-sam"> <form action="submit.php" method="post"> <textarea name="msgpost" id="msgpost" cols="50" rows="10"></textarea> <input type="submit" value="Submit" onsubmit="return myDoSubmit()"/> </form> </body> </html>

    Read the article

  • HTTP compression - can I configure a client to compress the data sent to a server?

    - by lgomide
    Hello, I'm using IIS 7 as the web server for my application. If I enable dynamic content compression on the server, will this also enable clients to send compressed data to the server, if they can? I mean, my application uses SOAP web services, and clients usually send large chunks of data to the server. The clients are written in C#/.NET. Is there any kind of configuration I can do in a web reference / service reference in order to tell them to compress the content before they send it to IIS? And do I have to do any kind of configuration in IIS in order for this to work? Thanks in advance
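
    For what it's worth, the raw HTTP mechanism in question is the client compressing the request body and labelling it with a Content-Encoding header; a minimal Python sketch of such a request (the endpoint URL is a placeholder, and whether IIS/WCF actually accepts and decompresses such a body depends entirely on server-side configuration):

        import gzip
        import urllib.request

        body = gzip.compress(b"<soap:Envelope>...large SOAP payload...</soap:Envelope>")
        request = urllib.request.Request(
            "http://example.com/Service.asmx",  # placeholder endpoint
            data=body,
            headers={
                "Content-Encoding": "gzip",  # declares that the request body is gzip-compressed
                "Content-Type": "text/xml; charset=utf-8",
            },
        )
        # urllib.request.urlopen(request)  # only useful if the server is set up to accept it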

    Read the article

  • [SOLVED] Flash 10, AS3 - save text of textarea and reuse it to replace text in textarea

    - by user427969
    Hi everyone. Is it possible to save the text of a TextArea (Flash 10, AS3, CS5) in some variable, together with its text format (more than one color), and then reuse it to replace the text in the TextArea? I tried saving the TextArea's htmlText, but the problem is that when I put it back into the TextArea, the P tags cause a problem: there will always be another extra line. If anyone wants to see the P-tag problem, try the following. Just click on the text and then press the down arrow key; the cursor will go to the next line. import fl.controls.TextArea; var txtHTML:TextArea = new TextArea(); txtHTML.move(0,0); var default_format:TextFormat = new TextFormat(); default_format.font = "Arial"; default_format.bold = false; default_format.align = "center"; default_format.color = 0xFFFF00; default_format.size = 14; var field:TextField = txtHTML.textField; field.defaultTextFormat = default_format; field.setTextFormat(default_format); field.alwaysShowSelection = true; field.background = true; field.type = 'input'; field.multiline = true; field.backgroundColor = 0x777777; field.embedFonts = true; txtHTML.htmlText = '<P ALIGN="CENTER"><FONT FACE="_sans" SIZE="14" COLOR="#FFFF00" LETTERSPACING="0" KERNING="0">ASDF</FONT></P>'; field.x = 0; field.y = 0; field.width = 400; field.height = 200; field.text = ""; addChild(txtHTML); Is there a way to do this? Thanks a lot for any help. Regards

    Read the article

  • Rails3 renders a js.erb template with a text/html content-type instead of text/javascript

    - by Yannis
    Hi, I'm building a new app with 3.0.0.beta3. I'm simply trying to render a js.erb template for an Ajax request to the following action (in publications_controller.rb): def get_pubmed_data entry = Bio::PubMed.query(params[:pmid]) # searches PubMed and gets the entry @publication = Bio::MEDLINE.new(entry) # creates Bio::MEDLINE object from entry text flash[:warning] = "No publication found." if @publication.title.blank? and @publication.authors.blank? and @publication.journal.blank? respond_to do |format| format.js end end Currently, my get_pubmed_data.js.erb template is simply alert('<%= @publication.title %>') The server is responding with the following alert('Evidence for a herpes simplex virus-specific factor controlling the transcription of deoxypyrimidine kinase.') which is perfectly fine, except that nothing happens in the browser, probably because the content-type of the response is 'text/html' instead of 'text/javascript', as shown by the response header partially reproduced here: Status 200 Keep-Alive timeout=5, max=100 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html; charset=utf-8 Is this a bug or am I missing something? Thanks for your help!

    Read the article

  • How to bring an application from Sublime Text to a web IDE for sharing?

    - by Kyle Pennell
    I generally work on my projects locally in Sublime Text but sometimes need to share them with others using things like JSFiddle, CodePen, or Plunker. This is usually so I can get unstuck. Is there an easier way to share code that doesn't involve pure copy-pasting and the hassle of getting all the dependencies right in a new environment? It's taking me hours to get some of my Angular apps working in Plunker, and I'm wondering if there's a better way.

    Read the article

  • What options are out there for an embeddable WYSIWIG text editor?

    - by Evan Plaice
    I'm thinking of something along the lines of TinyMCE. Please include a list of features. Examples include: supports text formatting, supports links, supports images, syntax types (markdown/wiki/etc.), licensing and/or pricing, customizability, plugin support, browser compatibility. Note: Please limit the answers to one editor per answer to preserve cleanliness. Update: Forgot to add browser compatibility to the list.

    Read the article

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents, and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy." The FTS3 syntax for this query is the following: (rating OR upgraded) NEAR/3 buy That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this: # create an SQLite3 db in memory conn = sqlite3.connect(':memory:') c = conn.cursor() c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)') conn.commit() Then, for each document, do something like this: # insert the document text into the fts table, so I can run a query c.execute('insert into fts(content) values (?)', (content,)) conn.commit() # execute my FTS query here, look at the results, etc # remove the document text from the fts table before working on the next document c.execute('delete from fts') conn.commit() This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4. The 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
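
    One lighter-weight possibility (not from the original post) is to skip the per-document insert/delete cycle entirely and approximate the NEAR/3 query with plain tokenization; a rough Python sketch, treating NEAR/3 as "at most three tokens apart":

        import re

        def near(text, words, anchor, distance=3):
            """Return True if any word in `words` occurs within `distance` tokens of `anchor`."""
            tokens = re.findall(r"\w+", text.lower())
            word_positions = [i for i, tok in enumerate(tokens) if tok in words]
            anchor_positions = [i for i, tok in enumerate(tokens) if tok == anchor]
            return any(abs(i - j) <= distance
                       for i in word_positions for j in anchor_positions)

        print(near("The rating was upgraded to buy today", {"rating", "upgraded"}, "buy"))  # True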

    Read the article

  • Delete text from copied text

    - by riddle
    Hello, I would like to use JavaScript to clean up text that’s being copied from my site. I use snippets like this: body { vertical-align: middle; ? } where the ? stands in for a Unicode marker that points to a comment later on. I want readers to copy this snippet and use it – so I need to delete that Unicode marker. How can I access the text that’s being copied and make changes to it? I considered deleting the marker(s) from the snippet when the user clicks (mousedown) on it, so she could select the text, copy it, and then I would restore the markers, but it seems a really long way to do it.

    Read the article

  • Within headers, images with alt text vs. text

    - by Court
    Do search engines treat the alt text of an image placed within an h1 tag the same way they would treat regular text placed in an h1 tag? I gave a search through here looking for an answer to this question, but was only able to find information on image replacement and the infamous h1 debate. For example would: <h1><img src="#" alt="Contact Us" /></h1> Act the same as: <h1>Contact Us</h1> In the electronic eye of a search engine? This seems considerably less "CSS Hacky" than other image replacement techniques like negative text indents, display:none, height:0, or ridiculous z-index integers.

    Read the article

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient with compression. I did a few tests on different filetypes and sizes comparing the 7-Zip and WinRAR default settings on their normal compression and their best compression, and in a lot of cases WinRAR was 50% faster and in some it was actually 100% faster. But, I do like FOSS more. So here are my questions: Is there a way to make 7-Zip speed up? I'd like it to at least be on par with WinRAR's speed Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command line thing. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare minimum compression. If it matters, I use a quad core Intel i7 720 (1.6 GHz)/(2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7-Zip, and dual-boot Debian x64 5.0.4 and Windows 7 Home.

    Read the article

  • Use ImageMagick to convert TIFF to PNGs, how to improve the speed?

    - by Woo
    I am using "convert" from ImageMagick to get PNGs from multi-page TIFF files; everything is good except the speed. From the "convert" documentation, I found: For the MNG and PNG image formats, the quality value sets the zlib compression level (quality / 10) and filter-type (quality % 10). For compression level 0, the Huffman-only strategy is used, which is fastest but not necessarily the worst compression. The default PNG compression is 75. So I tried "-quality 0", but there was almost no change in speed. Can anyone share ideas on how to improve the speed? Here is my command: convert 100Pages.tif[0,1,2,3,4,5] -quality 0 100Pages.png Thanks!
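
    As a quick sanity check of that quality mapping (purely illustrative arithmetic in Python, not an ImageMagick call):

        # For PNG output, ImageMagick's -quality value encodes two things:
        # tens digit = zlib compression level, ones digit = PNG filter type.
        for quality in (0, 10, 75, 95):
            level, png_filter = quality // 10, quality % 10
            print(f"-quality {quality}: zlib level {level}, filter type {png_filter}")
        # -quality 0  -> level 0 (Huffman-only strategy), filter 0
        # -quality 75 -> level 7, filter 5 (the documented default)

    If dropping to level 0 or 1 barely changes the timing, the bottleneck may simply be decoding the multi-page TIFF rather than the PNG compression itself.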

    Read the article

  • copy relative path of file

    - by efr
    I can copy the full path of a file: { "keys": ["super+i"], "command": "copy_file_path" } but I don't know how to copy its relative path (relative to the project folder). I even created a .sublime-project: { "folders": [ { "path": ".", "folder_exclude_patterns": ["node_modules"], "follow_symlinks": true } ] } But copy_file_path still copies the absolute path.
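
    One way to get this without installing a package is a tiny user plugin; a rough sketch assuming the standard Sublime Text Python plugin API (the command name copy_relative_file_path is made up here):

        import os
        import sublime
        import sublime_plugin

        class CopyRelativeFilePathCommand(sublime_plugin.TextCommand):
            def run(self, edit):
                file_name = self.view.file_name()
                folders = self.view.window().folders()
                if not file_name or not folders:
                    return
                # Use the first project folder as the base for the relative path
                sublime.set_clipboard(os.path.relpath(file_name, folders[0]))

    Saved as a .py file under Packages/User, it could then be bound to a key the same way copy_file_path is above, e.g. { "keys": ["super+shift+i"], "command": "copy_relative_file_path" }.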

    Read the article

  • Why does compressing and decompressing my SSD hard drive free up space?

    - by Paperflyer
    I bought an SSD (SandForce 2), created a tiny 25GB partition on it for Windows and installed Windows 7 64-bit. In order to free disk space, I enabled compression on the drive using the Properties entry in the context menu for the drive in Explorer. Prior to compressing I had around 5GB of free space. After compression I had 4GB, so compression was not working for me. I figured this might have happened because of the built-in data compression of the SSD. I decompressed the files again - after decompression, it left me with 7GB of free space! Better yet, after restarting, I had 10GB. What is happening here?

    Read the article

  • Exception when indexing text documents with Lucene, using SnowballAnalyzer for cleaning up

    - by Julia
    Hello!!! I am indexing documents with Lucene and am trying to apply the SnowballAnalyzer for punctuation and stopword removal from the text. I keep getting the following error :( IllegalAccessError: tried to access method org.apache.lucene.analysis.Tokenizer.<init>(Ljava/io/Reader;)V from class org.apache.lucene.analysis.snowball.SnowballAnalyzer Here is the code; I would very much appreciate help!!!! I am new to this. public class Indexer { private Indexer(){}; private String[] stopWords = {....}; private String indexName; private IndexWriter iWriter; private static String FILES_TO_INDEX = "/Users/ssi/forindexing"; public static void main(String[] args) throws Exception { Indexer m = new Indexer(); m.index("./newindex"); } public void index(String indexName) throws Exception { this.indexName = indexName; final File docDir = new File(FILES_TO_INDEX); if(!docDir.exists() || !docDir.canRead()){ System.err.println("Something wrong... " + docDir.getPath()); System.exit(1); } Date start = new Date(); PerFieldAnalyzerWrapper analyzers = new PerFieldAnalyzerWrapper(new SimpleAnalyzer()); analyzers.addAnalyzer("text", new SnowballAnalyzer("English", stopWords)); Directory directory = FSDirectory.open(new File(this.indexName)); IndexWriter.MaxFieldLength maxLength = IndexWriter.MaxFieldLength.UNLIMITED; iWriter = new IndexWriter(directory, analyzers, true, maxLength); System.out.println("Indexing to dir..........." + indexName); if(docDir.isDirectory()){ File[] files = docDir.listFiles(); if(files != null){ for (int i = 0; i < files.length; i++) { try { indexDocument(files[i]); }catch (FileNotFoundException fnfe){ fnfe.printStackTrace(); } } } } System.out.println("Optimizing...... "); iWriter.optimize(); iWriter.close(); Date end = new Date(); System.out.println("Time to index was" + (end.getTime()-start.getTime()) + "miliseconds"); } private void indexDocument(File someDoc) throws IOException { Document doc = new Document(); Field name = new Field("name", someDoc.getName(), Field.Store.YES, Field.Index.ANALYZED); Field text = new Field("text", new FileReader(someDoc), Field.TermVector.WITH_POSITIONS_OFFSETS); doc.add(name); doc.add(text); iWriter.addDocument(doc); } }

    Read the article

  • jquery ui is not scaling text properly!

    - by Stephen Belanger
    I'm trying use jquery ui to scale a div that I'm dragging around to make it easier to see what's behind it, but any text inside it is scaling strangely. The text itself becomes smaller, but it seems to have a bunch of padding around it and is floating now. The text extends past the bottom of the div even though it should be contained properly by the div. I put a red border around the lines of text and the borders are the same size as the original text. I'm not really sure what to do to get this to work... HTML: <div class="item draggable" id="item-1'"> <div class="image-block"> <a class="delete-button" title="delete me!" href="/remove/1" onclick="return $(this).confirm(\'Really remove this image?\');">X</a> <a class="image" href="/edit/1"><img src="/someimage.jpg" /></a> <div class="clear-block"></div> </div> <h3>Some title</h3> </div> CSS: div.image-list div.item { float:left; background:#fff; width:150px; padding:5px; margin:4px; border:1px solid #d3d5d6; } div.image-list div.item h3 { margin:0; padding:0; border:solid 1px #F00; } div.image-list div.item div.image-block a.delete-button { float:right; position:relative; background:#fff; display:none; top:0.8em; margin-bottom:-20.0em; width:3em; height:1.8em; padding:0.2em 1em; } div.image-list div.item div.image-block a.image { float:left; display:block; } .clear-block { clear:both; } jquery: $(".draggable").draggable({ helper: 'clone', start: function(ev, ui) { $(ui.helper).effect( "scale", { percent: 50 }, 200 ); } });

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display speed, and one of the methods is to gzip content from the webserver. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes. My reply: What is the reason(s) why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively. And that seems appropriate on our site where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them. So I'm here for some fact checking. Is the 860 byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150 byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
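
    The "small files can get larger" part of Google's note is easy to reproduce (a small Python illustration; exact byte counts will vary with the payload):

        import gzip

        small = b'{"ok":true,"id":42}'                 # a tiny AJAX-style payload
        large = b'{"id":1,"name":"example"},' * 200    # a few KB of repetitive JSON
        for label, payload in (("small", small), ("large", large)):
            compressed = gzip.compress(payload)
            print(label, len(payload), "->", len(compressed), "bytes")
        # gzip adds roughly 18 bytes of fixed header/trailer, so tiny payloads
        # typically come out larger, while the large one shrinks dramatically.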

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year which was almost too long for mortal man to finish in the 1-hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology, and also Compression which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use.
    Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to top Exercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task: Try out different forms of compression available in ZFS Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system.
    With the property enabled duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared. Task: See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendant dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Back to top Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate: root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only: root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.

    Read the article

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to manually do something about the serialisation. There appear to be a variety of LZW implementations in Javascript, some of which confuse me as to their compatibility for in-browser use only versus transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to incur a noticeable performance drag when Javascript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation, MessagePack does not appear to have any active support in Javascript itself; BSON does not have any Javascript implementation; and an alternative BISON project that I tested does not deserialise everything back to its original values (large numbers), and it does not look like any further development will happen either. What are some other options others have explored for Node.js?

    Read the article

  • What is the quickest way to indent a block of text with spaces for use within a web browser?

    - by ændrük
    I occasionally have the need to indent a block of text with spaces for use within a web browser, for example, when formatting a code block on this site or in a post on Launchpad. So far I've just done it by hand by copying four spaces to the clipboard and then mashing keys really fast: ↓, Home, Ctrl+V (repeat) What is the quickest way to accomplish this? Copying and pasting to another program? (Which?) A Firefox or Chrome browser extension? A command to directly modify the clipboard contents? An auto-typing program?
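
    One non-interactive option (not from the original post) is a tiny filter that indents whatever it receives on stdin by four spaces; paired with whatever clipboard tool the platform provides (pbpaste/pbcopy, xclip, and so on) it avoids the key-mashing entirely. A minimal Python 3 sketch:

        import sys
        import textwrap

        # Read the block from stdin and write it back out indented by four spaces
        sys.stdout.write(textwrap.indent(sys.stdin.read(), "    "))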

    Read the article

  • Text editor with coloring to highlight "non-parameters" in conf files?

    - by Zabba
    Some .conf files have a lot of comments and parameters in them like so: # WINS Server - Tells the NMBD components of Samba to be a WINS Client # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both ; wins server = w.x.y.z # This will prevent nmbd to search for NetBIOS names through DNS. dns proxy = no ..... It gets difficult to look for only the parameters among the plethora of comments, so, is there some text editor that can highlight the comments in dark grey so that the real parameters stand out?
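
    Not an editor feature, but a quick way to see just the live parameters is to filter out the comment lines; a small Python sketch (the conf file path is a placeholder):

        import sys

        # Print only lines that are not blank and not comments ('#' or ';' prefixed)
        with open(sys.argv[1] if len(sys.argv) > 1 else "smb.conf") as conf:
            for line in conf:
                stripped = line.strip()
                if stripped and not stripped.startswith(("#", ";")):
                    print(line, end="")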

    Read the article
