Map Reduce job on Amazon: argument for custom jar
Posted by zero51 on Stack Overflow
Published on 2010-06-13T06:46:27Z
Hi all,
This is one of my first attempts at running a MapReduce job on AWS through its Management Console. I have uploaded to S3 a runnable jar developed on Hadoop 0.18, and it works on my local machine. As described in the documentation, I pass the S3 paths for input and output as arguments to the jar: so far, so good. The problem is the third argument, another path (as a string) to a file that I need to load while the job is running. That file also resides in an S3 bucket, but my jar doesn't seem to recognize the path, and I get a FileNotFoundException when it tries to load it. That is strange, because this path looks exactly like the other two...
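For context, a common cause of this symptom (not confirmed as the cause here) is opening the side file with the job's default FileSystem, which on a cluster resolves to HDFS, so an `s3n://` path is looked up on HDFS and fails. A minimal sketch of asking the path itself for its FileSystem instead; the class name, method, and the example URI are placeholders, not from the original question:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper illustrating one way to open an S3-resident side file
// from inside a running Hadoop 0.18 job.
public class S3SideFileLoader {

    public static FSDataInputStream open(String uri, Configuration conf) throws IOException {
        // e.g. "s3n://my-bucket/lookup.txt" -- placeholder, not the asker's path
        Path path = new Path(uri);
        // Ask the Path for the FileSystem that owns it. Using FileSystem.get(conf)
        // here would return the job's default FS (typically HDFS on the cluster)
        // and could raise FileNotFoundException for an S3 path.
        FileSystem fs = path.getFileSystem(conf);
        return fs.open(path);
    }
}
```

The input and output paths work because the framework resolves them through the job's InputFormat/OutputFormat, which already handle scheme-qualified URIs; a path opened manually in user code has to do the same resolution itself.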
Anyone have any idea?
Thank you
Luca