- Options.minWorkerThreads = 200
2013-08-16 21:24:40,090 INFO metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(3048)) - Options.maxWorkerThreads = 100000
2013-08-16 21:24:40,091 INFO metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(3050)) - TCP keepalive = true
2013-08-16 21:24:40,104 INFO metastore.HiveMetaStore (HiveMetaStore.java:logInfo(392)) - 1: get_databases: default
2013-08-16 21:24:40,123 INFO metastore.HiveMetaStore
Logging initialized using configuration in file:/C:/apps/dist/hive-0.9.0/conf/hive-log4j.properties
2013-08-16 21:25:03,078 INFO ql.Driver (PerfLogger.java:PerfLogBegin(99)) - <PERFLOG method=Driver.run>
2013-08-16 21:25:03,078 INFO ql.Driver (PerfLogger.java:PerfLogBegin(99)) - <PERFLOG method=compile>
2013-08-16 21:25:03,145 INFO parse.ParseDriver (ParseDriver.java:parse(427)) - Parsing command: DROP TABLE IF EXISTS HiveSampleTable
2013-08-16 21:25:03,445 INFO parse.ParseDriver (ParseDriver.java:parse(444)) - Parse Completed
2013-08-16 21:25:03,541 INFO hive.metastore (HiveMetaStoreClient.java:open(195)) - Trying to connect to metastore with URI thrift://headnodehost:9083
2013-08-16 21:25:03,582 INFO hive.metastore (HiveMetaStoreClient.java:open(209)) - Connected to metastore.
2013-08-16 21:25:03,604 INFO metastore.HiveMetaStore (HiveMetaStore.java:logInfo(392)) - 4: get_table : db=default tbl=HiveSampleTable
Again, the preceding log output is trimmed for brevity, but you can see how the log emits useful information, such as port numbers, the worker-thread limits, the metastore queries it fires to load the default tables, and much more. In the case of a Hive processing error, this log is the best place to look for further insight into the problem.
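When you are hunting for a failure, it is usually fastest to filter the log for ERROR entries and inspect the lines around them. The following is a minimal sketch; the file name and log contents below are made up for illustration, so point the command at your cluster's actual Hive log (on this cluster, under the distribution directory shown in the listing above):

```shell
# Build a small sample log to filter (illustrative data only; substitute the
# real hive log file from your cluster's log directory).
cat > hive-sample.log <<'EOF'
2013-08-16 21:25:03,078 INFO ql.Driver (PerfLogger.java:PerfLogBegin(99)) - <PERFLOG method=compile>
2013-08-16 21:25:04,210 ERROR ql.Driver - FAILED: Execution Error, return code 1
EOF

# Surface only the failures, with the preceding line for context.
grep -B1 "ERROR" hive-sample.log
```

The `-B1` flag keeps one line of leading context, which often contains the query or operation that triggered the failure.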
Note
A lot of documentation is available on Apache's site regarding the logging framework that Hadoop and its supporting projects implement. That information is not covered in depth in this chapter, which focuses on HDInsight-specific features.
Log4j Framework
There are a few key properties in the Log4j framework that will help you maintain your cluster's storage more efficiently. If every service is left logging at full verbosity, a busy Hadoop cluster can easily run out of storage space, especially in scenarios where your name node runs most of the other services as well. Such logging configurations can be controlled using the log4j.properties file present in the conf directory for each project. For example, Figure 11-4 shows the configuration file for my Hadoop cluster.
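As a sketch of the kind of change involved, the fragment below raises the logging threshold and caps the size of the rolling log files. The property names follow the stock Hadoop log4j.properties that ships with the distribution; treat the exact values as illustrative rather than recommended settings:

```properties
# Log only WARN and above to the rolling file appender (RFA).
hadoop.root.logger=WARN,RFA

# Cap each log file at 10 MB and keep at most 5 backups, so a single
# daemon's logging is bounded to roughly 50 MB of storage.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=10MB
log4j.appender.RFA.MaxBackupIndex=5
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

Dropping the threshold from INFO to WARN alone removes the bulk of the chatter you saw in the listing earlier, since almost all of those entries were INFO-level.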
 
 