Search Results

Search found 26977 results on 1080 pages for 'input device'.

Page 164/1080

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. You can easily prove (with zdb -uu and zdb -r) that the rootblock pointer in the ZFS uberblock, for example, points to blocks with absolutely identical content in all three locations. It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, it may deduplicate your redundant metadata down to a single copy on the non-volatile storage. When that block is corrupted, you essentially have three corrupted copies. Three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block. A metadata block is no different to its inner mechanisms than a normal data block, because there is no way to tell it that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is just that most implementations require you to activate it explicitly, whereas certain devices do it by default or by design, and you don't know about it. However, I'm not perfectly sure about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of it in the pool to increase redundancy even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when that LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the articles and their specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools: rotating rust is used for the pool, and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt: you have to fall back to the last known good transaction group anyway. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in hybrid-storage-pool implementations is the already-mentioned rotating rust. In conjunction with ZFS, this is more interesting when you use a storage array that is capable of dedup and take LUNs from it for your pool. But as mentioned before, on those devices it's a user-made decision to activate dedup, so it's less probable that you are deduplicating your redundancies without knowing it. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    In the end, Robin is correct: it's yet another reason why protecting your data by creating redundancy across several disks (with mirror or parity RAIDs) is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
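
    To make the failure mode concrete, here is a toy sketch in Python (the class and block contents are made up for illustration, not a real storage stack): a store that dedups by content hash keeps exactly one physical copy of three "redundant" metadata writes, so a single corruption destroys all of them.

        import hashlib

        class DedupBlockStore:
            """Toy block store that dedups by content hash,
            as some controllers do internally."""
            def __init__(self):
                self.blocks = {}              # content hash -> one physical copy

            def write(self, data: bytes) -> str:
                key = hashlib.sha256(data).hexdigest()
                self.blocks[key] = data       # identical writes collapse here
                return key

        store = DedupBlockStore()
        refs = [store.write(b"uberblock: rootblock pointer") for _ in range(3)]
        print(len(store.blocks))              # 1 -- the redundancy never hit the media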

    Read the article

  • How can I translate Linux keycodes from /dev/input/event* to ASCII in Perl?

    - by Bogdan Constantinescu
    I'm writing a Perl script that reads data from the infamous /dev/input/event* and I didn't find a way to translate the key codes generated by the kernel into ASCII. I'm talking about the Linux key codes in this table here, and I can't seem to find anything that would help me translate them without hardcoding an array into the script. Am I missing something? I'd like to skip the array part because it doesn't seem to be good practice, so any ideas? :)
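
    For comparison, a minimal sketch of the same task in Python with the python-evdev package, which ships the kernel's keycode table so nothing has to be hardcoded by hand; the device path is an example. A Perl script could take the same route by parsing the KEY_* constants out of linux/input-event-codes.h instead of embedding its own array.

        from evdev import InputDevice, categorize, ecodes

        dev = InputDevice('/dev/input/event0')   # example path, pick yours
        for event in dev.read_loop():
            if event.type == ecodes.EV_KEY:      # key events only
                key = categorize(event)
                if key.keystate == key.key_down:
                    print(key.keycode)           # e.g. 'KEY_A'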

    Read the article

  • How do you sink input and output to a text file in R?

    - by Jeromy Anglim
    How do you sink both the console input and the console output to a text file? Take the following code:

        sink("temp.txt")
        1:10
        sink()

    It will write a text file that looks like this:

        [1] 1 2 3 4 5 6 7 8 9 10

    But how do I create a text file that looks like this:

        > 1:10
        [1] 1 2 3 4 5 6 7 8 9 10

    I've looked at ?sink and searched R-help.
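
    There is no direct Python equivalent of R's sink(), but as a rough analogue of the transcript being asked for, here is a sketch that logs each "command" together with its output (the expressions list and file name are made up for the example):

        expressions = ["list(range(1, 11))"]

        with open("temp.txt", "w") as log:
            for expr in expressions:
                log.write(f"> {expr}\n")        # echo the input line...
                log.write(f"{eval(expr)}\n")    # ...then its output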

    Read the article

  • Is frozenset adequate for caching of symmetric input data in a python dict?

    - by Debilski
    The title more or less says it all: I have a function which takes symmetric input in two arguments, e.g. something like

        def f(a1, a2):
            return heavy_stuff(abs(a1 - a2))

    Now, I want to introduce some caching method. Would it be correct / pythonic / reasonably efficient to do something like this:

        cache = {}
        def g(a1, a2):
            return cache.setdefault(frozenset((tuple(a1), tuple(a2))), f(a1, a2))

    Or would there be some better way?
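
    Two details of the snippet above are worth flagging: dict.setdefault(key, f(a1, a2)) evaluates f on every call, hit or miss, so it caches nothing; and frozenset((x, x)) collapses to a single element, which is only acceptable because f is symmetric. A sketch of the idea with an explicit miss check (heavy_stuff is a stand-in, and plain numbers are used for brevity; sequence arguments would be wrapped in tuple() as in the question to make them hashable):

        def heavy_stuff(d):
            return d * d                  # placeholder for the real work

        def f(a1, a2):
            return heavy_stuff(abs(a1 - a2))

        cache = {}

        def g(a1, a2):
            key = frozenset((a1, a2))     # order-insensitive key
            if key not in cache:          # evaluate f only on a miss
                cache[key] = f(a1, a2)
            return cache[key]

        assert g(3, 7) == g(7, 3)         # symmetric calls share one entry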

    Read the article

  • Why does Joda time change the PM in my input string to AM?

    - by Tree
    My input string is a PM time:

        log(start); // Sunday, January 09, 2011 6:30:00 PM

    I'm using Joda Time's pattern syntax as follows to parse the DateTime:

        DateTimeFormatter parser1 = DateTimeFormat.forPattern("EEEE, MMMM dd, yyyy H:mm:ss aa");
        DateTime startTime = parser1.parseDateTime(start);

    So, why is my output string AM?

        log(parser1.print(startTime)); // Sunday, January 09, 2011 6:30:00 AM
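
    The likely culprit is the pattern, not the data: in Joda-Time, "H" means hour-of-day (0-23) and wins over the parsed halfday, while clock-hours that respect AM/PM need lowercase "h", so the pattern would be "EEEE, MMMM dd, yyyy h:mm:ss aa". Python's strptime makes the same distinction (%H versus %I), which makes the effect easy to demonstrate:

        from datetime import datetime

        s = "Sunday, January 09, 2011 6:30:00 PM"

        # %H is hour-of-day (0-23) and ignores the %p field entirely:
        wrong = datetime.strptime(s, "%A, %B %d, %Y %H:%M:%S %p")
        # %I is clock-hour (1-12), so %p is applied:
        right = datetime.strptime(s, "%A, %B %d, %Y %I:%M:%S %p")

        print(wrong.hour, right.hour)     # 6 18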

    Read the article

  • convert table of input values into 2D array or struct?

    - by Henry
    What's the easiest way to convert a table of values (a 2D grid, like Excel) into a 2D array or struct in ColdFusion? I thought of using dots in the field names and evaluating them into a struct, but AFAIK the name attribute of an input field can only contain alphanumeric characters and underscores, and the first char must be alphabetic.
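
    One common workaround (illustrated here in Python, since the idea is language-neutral; the "cell_row_col" naming scheme and form data are made up for the example) is to encode the coordinates in the field name using only underscores, then split the names back into a two-level structure on the server:

        form = {"cell_1_1": "a", "cell_1_2": "b",
                "cell_2_1": "c", "cell_2_2": "d"}

        grid = {}
        for name, value in form.items():
            _, row, col = name.split("_")        # "cell_2_1" -> row 2, col 1
            grid.setdefault(int(row), {})[int(col)] = value

        print(grid[2][1])                        # "c"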

    Read the article

  • Bash: how to process variables from an input file?

    - by gilgongo
    I've got a bash script that reads input from a file like this:

        while IFS="|" read -r a b
        do
            echo "$a something $b somethingelse"
        done < "$FILE"

    The file it reads looketh like this:

        http://someurl1.com|label1
        http://someurl2.com|label2

    However, I'd like to be able to insert the names of variables into that file when it suits me, and have the script process them when it sees them, so the file might look like this:

        http://someurl1.com?$VAR|label1
        http://someurl2.com|label2

    So $VAR could be, for example, today's date, producing output like this:

        http://someurl1.com?20100320 something label1 somethingelse
        http://someurl2.com something label2 somethingelse
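
    In bash the commonly suggested route is eval, which executes anything in the file and is therefore risky with untrusted input. As a safer sketch of the same idea in Python (urls.txt and the VAR value are assumptions for the example), os.path.expandvars substitutes $VAR references from the environment and leaves everything else alone:

        import os

        os.environ["VAR"] = "20100320"            # e.g. today's date

        with open("urls.txt") as f:               # assumed data file
            for line in f:
                url, label = line.rstrip("\n").split("|")
                url = os.path.expandvars(url)     # "$VAR" -> "20100320"
                print(f"{url} something {label} somethingelse")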

    Read the article

  • Bash: Is it ok to use same input file as output of a piped command?

    - by Amro
    Consider something like:

        cat file | command > file

    Is this good practice? Could this overwrite the input file at the same time as we are reading it, or is it always read into memory first and then piped to the second command? Obviously I can use temp files as an intermediary step, but I'm just wondering..

        t=$(mktemp)
        cat file | command > ${t} && mv ${t} file
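
    For what it's worth, the redirection is the dangerous part: the shell truncates the output file when it sets up the pipeline, before the input has been read, so this usually destroys the input file. The temp-file pattern in the question is the standard fix; here is a sketch of the same pattern in Python (file name and transform are examples), writing a temporary file in the same directory and atomically replacing the original:

        import os
        import tempfile

        def rewrite_in_place(path, transform):
            d = os.path.dirname(os.path.abspath(path))
            with open(path) as src, tempfile.NamedTemporaryFile(
                    "w", dir=d, delete=False) as tmp:
                for line in src:
                    tmp.write(transform(line))
            os.replace(tmp.name, path)            # atomic rename

        rewrite_in_place("file.txt", str.upper)   # example transform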

    Read the article
