Using a "local" S3 emulation layer as a replacement for HDFS?

Posted by user183394 on Stack Overflow
Published on 2012-09-17T06:09:48Z

I have been testing the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8GB RAM, an Intel X25MG2 SSD, and runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good.

Looking at "Setting up hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook.

However, I can't find where to change jets3t.properties so that the endpoint points to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments.

I would appreciate a hint as to how I can change the endpoint to a different hostname.
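For reference, here is the kind of configuration I am hoping will work — a sketch only, assuming JetS3t picks up a jets3t.properties file from Hadoop's classpath (e.g. dropped into the conf/ directory), and that the standard JetS3t endpoint properties apply. The port number and bucket name are placeholders for our emulator:

```properties
# jets3t.properties — placed on Hadoop's classpath (e.g. in conf/).
# Point JetS3t at the local emulator instead of s3.amazonaws.com.
s3service.s3-endpoint=localhost
s3service.s3-endpoint-http-port=8080
# The emulator speaks plain HTTP:
s3service.https-only=false
# Use path-style requests, since DNS-style bucket hostnames
# (bucket.localhost) won't resolve locally:
s3service.disable-dns-buckets=true
```

And the corresponding Hadoop side, with "mybucket" and the credentials as dummy placeholders for the emulator:

```xml
<!-- core-site.xml: make the s3n connector the default filesystem. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>s3n://mybucket</value>
  </property>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>dummy-key</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>dummy-secret</value>
  </property>
</configuration>
```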

Regards,

--Zack

© Stack Overflow or respective owner
