    if (parser.isValidTemperature()) {
      int airTemperature = parser.getAirTemperature();
      context.write(new Text(parser.getYear()), new IntWritable(airTemperature));
    } else if (parser.isMalformedTemperature()) {
      System.err.println("Ignoring possibly corrupt input: " + value);
      context.getCounter(Temperature.MALFORMED).increment(1);
    }
  }
}
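The Temperature.MALFORMED counter used above comes from a user-defined enum that is not shown in this excerpt; Hadoop groups dynamic counters by enum class and name. As a minimal self-contained sketch of that enum-counter pattern, here is an illustration using a plain EnumMap in place of Hadoop's actual Counter class (the class and method names below are stand-ins for illustration, not Hadoop APIs):

```java
import java.util.EnumMap;

public class CounterSketch {
    // The enum the mapper snippet assumes; one constant per counter.
    public enum Temperature { MALFORMED }

    // Stand-in for context.getCounter(...).increment(1) -- a local tally,
    // not Hadoop's distributed Counter.
    private final EnumMap<Temperature, Long> counters =
            new EnumMap<>(Temperature.class);

    public void increment(Temperature c) {
        counters.merge(c, 1L, Long::sum);
    }

    public long value(Temperature c) {
        return counters.getOrDefault(c, 0L);
    }

    public static void main(String[] args) {
        CounterSketch sketch = new CounterSketch();
        sketch.increment(Temperature.MALFORMED);
        sketch.increment(Temperature.MALFORMED);
        System.out.println(sketch.value(Temperature.MALFORMED)); // prints 2
    }
}
```

In a real job, counter values accumulate across all tasks and can be read from the driver after completion via the job's counters.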
Hadoop Logs
Hadoop produces logs in various places, and for various audiences. These are summarized in Table 6-2.
Table 6-2. Types of Hadoop logs

System daemon logs
  Primary audience: Administrators
  Description: Each Hadoop daemon produces a logfile (using log4j) and another file that combines standard out and error. Written in the directory defined by the HADOOP_LOG_DIR environment variable.
  Further information: System logfiles and Logging

HDFS audit logs
  Primary audience: Administrators
  Description: A log of all HDFS requests, turned off by default. Written to the namenode's log, although this is configurable.
  Further information: Audit Logging

MapReduce job history logs
  Primary audience: Users
  Description: A log of the events (such as task completion) that occur in the course of running a job. Saved centrally in HDFS.
  Further information: Job History

MapReduce task logs
  Primary audience: Users
  Description: Each task child process produces a logfile using log4j (called syslog), a file for data sent to standard out (stdout), and a file for standard error (stderr). Written in the userlogs subdirectory of the directory defined by the YARN_LOG_DIR environment variable.
  Further information: This section
YARN has a service for log aggregation that takes the task logs for completed applications and moves them to HDFS, where they are stored in a container file for archival purposes. If this service is enabled (by setting yarn.log-aggregation-enable to true on the cluster), then task logs can be viewed by clicking on the logs link in the task attempt web UI, or by using the mapred job -logs command.
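As a sketch, enabling the aggregation service described above might look like this in yarn-site.xml (the property name is from the text; placement in yarn-site.xml follows standard YARN configuration practice):

```
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```

Once a job has completed and its logs have been aggregated, they can be fetched with mapred job -logs followed by the job ID (the ID shown here is a hypothetical placeholder): mapred job -logs job_1234567890123_0001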