Combining HBase and HDFS results in Exception in makeDirOnFileSystem
Posted by utrecht on Server Fault, published 2014-06-09.
Introduction
An attempt to combine HBase and HDFS results in the following:
2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Exception in makeDirOnFileSystem
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
        ... 6 more
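The nested AccessControlException appears to be the real failure: hbase.rootdir (/hbase) does not exist yet, so HMaster tries to create it, and HDFS therefore checks whether the hbase user has WRITE access on the parent inode "/", which is owned by vagrant with mode drwxr-xr-x. One candidate fix (a sketch only; it assumes the commands run as an HDFS superuser, here presumably vagrant since that user owns "/") would be to pre-create the directory and hand it over to the hbase user, so that no write on "/" is needed:
[vagrant@localhost hadoop-hdfs]$ hadoop fs -mkdir /hbase
[vagrant@localhost hadoop-hdfs]$ hadoop fs -chown hbase /hbase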
The configuration and system settings are as follows:
[vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
Found 1 items
-rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso
[vagrant@localhost hadoop-hdfs]$
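Note that the listing above shows the children of the root directory, not the root inode itself; the ownership reported in the exception (inode "/":vagrant:supergroup:drwxr-xr-x) can be confirmed directly with the -d flag of ls, assuming this Hadoop version supports it:
[vagrant@localhost hadoop-hdfs]$ hadoop fs -ls -d hdfs://localhost/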
/etc/hadoop/conf/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
/etc/hbase/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
/etc/hadoop/conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hellodatanode</value>
  </property>
</configuration>
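As an aside, dfs.name.dir and dfs.data.dir are the deprecated Hadoop 1 names for these properties; they still work on Hadoop 2 through the deprecation mapping, but the current equivalents would be:
<!-- Hadoop 2 property names; dfs.name.dir / dfs.data.dir are deprecated aliases -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/var/lib/hadoop-hdfs/cache</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/tmp/hellodatanode</value>
</property>
This is unrelated to the permission error, just a cleanup.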
NameNode directory permissions
[vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
total 8
-rwxrwxrwx. 1 hbase hdfs 15 Jun 8 23:43 in_use.lock
drwxrwxrwx. 2 hbase hdfs 4096 Jun 8 23:43 current
[vagrant@localhost hadoop-hdfs]$
HMaster is able to start if the fs.defaultFS property is commented out in core-site.xml.
The NameNode is listening:
[vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 33.33.33.33:50070       33.33.33.1:57493        ESTABLISHED off (0.00/0/0)
and is accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp.
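Note that 50070 is only the NameNode web UI port; the port HBase actually dials is the RPC port from fs.defaultFS, 8020, which can be checked the same way (the successful hadoop fs -ls against hdfs://localhost/ above already suggests it is reachable, since 8020 is the default NameNode RPC port):
[vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 8020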
Question
How can the makeDirOnFileSystem exception be solved so that HBase can connect to HDFS?