ZK quorum at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:131)
This problem was caused by one of the ZooKeeper servers (on hc1r1m1) running under the wrong Linux account, so it did not have file system access. The solution was to shut it down and restart it as the correct user.
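A minimal sketch of the kind of check and restart involved, assuming ZooKeeper is installed under /usr/lib/zookeeper and should run as the hadoop account (both of these are assumptions that depend on your installation):
# Check which Linux account the ZooKeeper server process is running under
ps -ef | grep -i [z]ookeeper
# Stop it (as root if it was started by root), then start it again as the intended account
/usr/lib/zookeeper/bin/zkServer.sh stop
su - hadoop -c "/usr/lib/zookeeper/bin/zkServer.sh start"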
Another error you may encounter is:
2014-04-09 18:32:53,741 ERROR
org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.io.IOException: Unable to create data directory
/var/lib/zookeeper/zookeeper/version-2
Again, this was the same issue as the previous error: the ZooKeeper server (on hc1r1m1) was running under the wrong Linux account and did not have file system access, so it could not create its data directory. The solution was to shut it down and restart it as the correct user.
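If the data directory itself has been created with the wrong owner, correcting its ownership before the restart also helps. A minimal sketch, using the directory from the error above and assuming the server should run as the hadoop account:
# Give the ZooKeeper account ownership of its data directory so it can create
# the version-2 subdirectory on startup
chown -R hadoop:hadoop /var/lib/zookeeper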
An HBase error that could occur is:
org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate
with client version 3
This was a Hadoop version mismatch seen from the HBase side: IPC version 4 is the Hadoop 1.0 protocol, whereas version 7 is the Hadoop 2.0 protocol, so this HBase build expects Hadoop version 1.x. That is why we are using Hadoop version 1.2.1 with Nutch 2.x.
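One way to confirm which side is at which version is to compare the running cluster's Hadoop release with the Hadoop client jar bundled inside HBase. A minimal sketch, assuming HBase is installed under $HBASE_HOME (the jar name pattern is also an assumption and varies between HBase releases):
# Report the Hadoop version of the running cluster
hadoop version
# List the Hadoop client jar that HBase ships; its major version must match the cluster
ls $HBASE_HOME/lib/hadoop-core-*.jar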
This error occurred during the Nutch crawl:
14/04/12 20:32:30 ERROR crawl.InjectorJob: InjectorJob:
java.lang.ClassNotFoundException: org.apache.gora.hbase.store.HBaseStore
The Gora configuration was incorrect, so the HBase store class could not be found on the classpath. Go back, check the Gora settings, and retry the crawl once you have fixed them.
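For a typical Nutch 2.x setup, the settings worth checking are the default data store in conf/gora.properties and the gora-hbase dependency in the Nutch build. A minimal sketch of such a check, assuming Nutch is installed under $NUTCH_HOME (the file locations are assumptions that depend on the release):
# The default Gora data store should point at the HBase store class
grep gora.datastore.default $NUTCH_HOME/conf/gora.properties
# Expected: gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
# The gora-hbase dependency should be enabled (uncommented) in the Nutch Ivy build
grep gora-hbase $NUTCH_HOME/ivy/ivy.xml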
These errors occurred during a crawl in the HBase logs:
2014-04-12 20:52:37,955 INFO org.apache.hadoop.hbase.master.ServerManager:
Waiting on regionserver(s) to checkin
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=root, access=WRITE, inode=".logs":hadoop:supergroup:rwxr-xr-x
They indicate an HDFS file system access problem for HBase: I had set the permissions on the HBase .logs directory incorrectly.
[hadoop@hc1nn bin]$ hadoop dfs -ls /hbase/ | grep logs
drwxrwxrwx - hadoop supergroup 0 2014-04-13 18:00 /hbase/.logs
Your permissions for HDFS directories should be fine, but if you do encounter a permissions error, you can use the hadoop dfs -chmod command to set them.
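As a sketch only, the kind of command involved looks like the following; the wide-open mode shown matches the listing above, but choose permissions and ownership appropriate to your cluster:
# Open up the HBase .logs directory so the region servers can write their logs
hadoop dfs -chmod -R 777 /hbase/.logs
# Alternatively, hand the directory back to the hadoop account
hadoop dfs -chown -R hadoop:supergroup /hbase/.logs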
This error occurred because I had an error in my /etc/hosts file:
14/04/13 12:00:54 INFO mapred.JobClient: Task Id :
attempt_201404131045_0016_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.io.IOException: java.lang.RuntimeException:
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect
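A frequent cause of this kind of ZooKeeper connection problem is the machine's own hostname resolving to the loopback address in /etc/hosts instead of its real network address. As a sketch only, with hypothetical IP addresses, a working layout looks like this:
# /etc/hosts -- each host name must map to its network address, not to 127.0.0.1
127.0.0.1      localhost
192.168.1.107  hc1nn
192.168.1.108  hc1r1m1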