2013-08-16 21:32:40,167 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
Source name ugi already exists!
2013-08-16 21:32:40,199 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-08-16 21:32:40,199 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 72.81875 MB
2013-08-16 21:32:40,199 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^23 =
8388608 entries
2013-08-16 21:32:40,199 INFO org.apache.hadoop.hdfs.util.GSet: recommended=8388608,
actual=8388608
2013-08-16 21:32:40,245 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hdp
2013-08-16 21:32:40,245 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
supergroup=supergroup
2013-08-16 21:32:40,245 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=false
2013-08-16 21:32:40,261 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100
2013-08-16 21:32:40,261 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-08-16 21:32:40,292 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean
2013-08-16 21:32:40,355 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:
dfs.namenode.edits.toleration.length = 0
2013-08-16 21:32:40,355 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
Caching file names occuring more than 10 times
2013-08-16 21:32:40,386 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:
Read length = 4
2013-08-16 21:32:40,386 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:
Corruption length = 0
2013-08-16 21:32:40,386 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:
Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-08-16 21:32:40,386 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary:
|---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-08-16 21:32:41,855 INFO org.apache.hadoop.http.HttpServer: Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-08-16 21:32:41,855 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort()
returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-08-16 21:32:41,855 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-08-16 21:32:42,527 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
Web-server up at: namenodehost:50070
2013-08-16 21:32:42,558 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2013-08-16 21:32:42,574 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2013-08-16 21:32:42,574 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2013-08-16 21:32:42,574 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
The log gives you important information such as the host name, the port on which the web interface listens,
and a variety of other storage-related details that can be useful when troubleshooting a problem. In the case of an
authentication problem with the DataNodes, you might see error messages similar to the following in the logs:
2013-08-16 21:32:43,152 ERROR org.apache.hadoop.security.UserGroupInformation:
PriviledgedActionException as:hdp cause:java.io.IOException: File /mapred/system/jobtracker.info
could only be replicated to 0 nodes, instead of 1.
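To surface messages like this quickly rather than reading the log top to bottom, you can filter the NameNode log by severity. The following is a minimal sketch; the file name namenode-sample.log and its contents are stand-ins for your real log, whose location depends on how HADOOP_LOG_DIR is configured on your cluster:

```shell
#!/bin/sh
# Create a small stand-in log file (in a real cluster you would point
# grep at the actual NameNode log under $HADOOP_LOG_DIR instead).
cat > /tmp/namenode-sample.log <<'EOF'
2013-08-16 21:32:42,558 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2013-08-16 21:32:43,152 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdp cause:java.io.IOException: File /mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1.
EOF

# Show only ERROR-level entries; the surrounding spaces keep the match
# anchored to the log-level field rather than arbitrary message text.
grep ' ERROR ' /tmp/namenode-sample.log
```

On a live system you would typically also check WARN entries and tail the log while reproducing the problem, for example with tail -f piped into the same grep filter.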