These .trace.log files were introduced with HDInsight cluster version 2.1. In version 1.6 clusters, the file names are .out.log.
Note
The following two sections are specific to HDInsight clusters in version 1.6. The log file types discussed are not
available if the cluster version is 2.1. This also holds true for the Windows Azure HDInsight Emulator since, as of this
writing, it deploys version 1.6 of the HDInsight components. In all probability, the HDInsight Emulator will soon be
upgraded to match the version of the Azure service, and both will have the same set of log files.
Service Wrapper Files
Apart from the startup logs, wrapper logs are also available for the HDInsight services. These files
contain the command string used to start the service, and they also record the process id once the service
starts successfully. They have the .wrapper.log extension and reside in the same directory as the .out.log files.
For example, if you open hiveserver.wrapper.log, you should see commands similar to the snippet below.
org.apache.hadoop.hive.service.HiveServer -hiveconf hive.hadoop.classpath=c:\apps\dist\hive-0.9.0\
lib\* -hiveconf hive.metastore.local=true -hiveconf hive.server.servermode=http -p 10000 -hiveconf
hive.querylog.location=c:\apps\dist\hive-0.9.0\logs\history -hiveconf hive.log.dir=c:\apps\dist\
hive-0.9.0\logs
2013-08-11 16:40:45 - Started 4264
Note that the process id of the service is recorded at the end of the wrapper log. This is very helpful in
troubleshooting scenarios where you want to trace a specific process that has already started, for example,
determining the heap memory usage of the running name node process while troubleshooting an out-of-memory
problem.
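As a minimal sketch of how you might automate that first step, the snippet below pulls the recorded process id out of a wrapper log. It assumes only the timestamped "Started &lt;pid&gt;" line format shown in the snippet above; the function name is illustrative.

```python
import re

def extract_started_pid(wrapper_log_text):
    """Return the last process id recorded by a '- Started <pid>' line, or None.

    Assumes the line format shown above, e.g.
    '2013-08-11 16:40:45 - Started 4264'.
    """
    pids = re.findall(r"- Started (\d+)\s*$", wrapper_log_text, flags=re.MULTILINE)
    return int(pids[-1]) if pids else None

# Example using the wrapper log line shown above:
sample = "2013-08-11 16:40:45 - Started 4264\n"
print(extract_started_pid(sample))  # 4264
```

With the pid in hand, you can attach standard JVM diagnostic tools to the running service process.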
Service Error Files
The HDInsight version 1.6 services also generate an error log file for each service. These files record the log
messages for the running Java services. If any errors are encountered while a service is running, the error's
stack trace is logged in these files. The error logs have the .err.log extension and, again, reside in the same
directory as the output and wrapper files. For example, if there are permission issues accessing the required files
and folders, you may see an error message similar to the following in your namenode.err.log file.
13/08/16 19:07:16 WARN impl.MetricsSystemImpl: Source name ugi already exists!
13/08/16 19:07:16 INFO util.GSet: VM type = 64-bit
13/08/16 19:07:16 INFO util.GSet: 2% max memory = 72.81875 MB
13/08/16 19:07:16 INFO util.GSet: capacity = 2^23 = 8388608 entries
13/08/16 19:07:16 INFO util.GSet: recommended=8388608, actual=8388608
13/08/16 19:07:16 INFO namenode.FSNamesystem: fsOwner=admin
13/08/16 19:07:16 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/16 19:07:16 INFO namenode.FSNamesystem: isPermissionEnabled=false
13/08/16 19:07:16 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/08/16 19:07:16 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: c:\hdfs\nn\current\VERSION (Access is denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:222)
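When scanning a long .err.log for the entries that matter, a small filter helps. The sketch below assumes only the log4j-style layout visible in the output above (date, time, level, then source and message); it is not part of HDInsight itself.

```python
def find_errors(log_lines):
    """Yield (timestamp, message) pairs for ERROR-level entries in a .err.log.

    Assumes the log4j layout shown above, e.g.
    '13/08/16 19:07:16 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.'
    """
    for line in log_lines:
        parts = line.split(None, 3)  # date, time, level, rest of the line
        if len(parts) == 4 and parts[2] == "ERROR":
            yield (parts[0] + " " + parts[1], parts[3].rstrip())

# Example with two of the lines shown above:
lines = [
    "13/08/16 19:07:16 INFO util.GSet: VM type = 64-bit",
    "13/08/16 19:07:16 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.",
]
for timestamp, message in find_errors(lines):
    print(timestamp, message)
```

Stack-trace continuation lines (those beginning with "at ...") do not match the four-field layout and are simply skipped, which keeps the output to the top-level error messages.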