Changing the block size of a DFS file in Hadoop

Posted by Sam on Stack Overflow, 2010-04-19.

I've found that my map tasks are currently inefficient when parsing one particular set of files (2 TB total). I'd like to change the block size of those files in the Hadoop DFS from 64 MB to 128 MB. But I can't find anything in the documentation about doing this for a single set of files rather than the entire cluster. Does anyone know the command that would change the block size when I upload the files (i.e. copy them from local to the DFS)?
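For what it's worth, here is the kind of thing I'm picturing via the Java FileSystem API, using the create() overload that takes an explicit per-file block size (just a sketch; the class name and the file paths are placeholders):

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class UploadWithBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            long blockSize = 128L * 1024 * 1024; // 128 MB instead of the 64 MB default

            InputStream in = new FileInputStream("/local/path/part-00000"); // placeholder path
            FSDataOutputStream out = fs.create(
                    new Path("/hdfs/path/part-00000"),  // placeholder destination
                    true,                               // overwrite if it exists
                    4096,                               // io buffer size
                    fs.getDefaultReplication(),         // keep the cluster's replication factor
                    blockSize);                         // block size for just this file
            IOUtils.copyBytes(in, out, conf, true);     // copies and closes both streams
        }
    }

If there's a plain hadoop fs command that does the same thing at upload time, that would be even better.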

Thanks!
