index information about the offsets for each message stored in the file. By
default it is allocated 10MB of space as a sparse file, but if it fills up
before the segment file itself is full, it forces the log segment to roll
over to a new one.
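The index allocation itself is tunable. The property name below is an assumption based on the 0.8-era broker configuration (log.index.size.max.bytes); the value shown is the 10MB default described above:

```properties
# The maximum size of the offset index file (allocated as a sparse
# file). If the index fills before the segment file does, the
# segment is rolled over early.
log.index.size.max.bytes=10485760
```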
# The maximum size of a log segment file. When this size is
# reached a new log segment will be created.
log.segment.bytes=536870912
The rate at which segments are checked for expiration is controlled by
log.cleanup.interval.mins . It defaults to checking once per minute,
which is usually sufficient.
# The interval at which log segments are checked to see if they can
# be deleted according to the retention policies
log.cleanup.interval.mins=1
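The retention policies that the comment above refers to are configured through separate properties. As a hedged illustration (the values shown are common examples, not defaults taken from this text), time-based and size-based retention look like this:

```properties
# Delete a segment once it is older than 7 days...
log.retention.hours=168
# ...or once the log for a partition exceeds this total size (1GB).
log.retention.bytes=1073741824
```

Whichever limit is reached first triggers deletion of the oldest eligible segments.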
As mentioned in the “Kafka Prerequisites” section, Kafka uses ZooKeeper
to manage the status of the brokers in a given cluster and the metadata
associated with them. The zookeeper.connect parameter defines the
ZooKeeper cluster that the broker uses to expose its metadata. This takes
a standard ZooKeeper connection string, allowing for comma-separated
hosts. It also allows the brokers to take advantage of ZooKeeper's
chroot-like behavior by specifying a default path in the connection string.
This can be useful when hosting multiple independent Kafka clusters in a
single ZooKeeper cluster.
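For example, two independent clusters could share one ZooKeeper ensemble by giving each its own chroot path; the hostnames and path names below are hypothetical:

```properties
# Brokers in cluster A:
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-a
# Brokers in cluster B:
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-b
```

Each cluster's znodes then live under its own root directory, so the two clusters never see each other's metadata.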
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify
# the root directory for all kafka znodes.
zookeeper.connect=localhost:2181