Anchoring and acking
We have talked about the DAG that is created for the execution of a Storm topology. Now,
when you design your topologies for reliability, there are two pieces of information that
need to be conveyed to Storm:
• Whenever a new link, that is, a new stream, is added to the DAG, it is called
anchoring
• When a tuple is processed in its entirety, it is called acking
When Storm knows these facts, it can track the tuples during processing and
accordingly fail or acknowledge them, depending on whether they have been
completely processed or not.
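Internally, Storm's acker tracks each tuple tree with an XOR checksum of 64-bit tuple ids: every anchored emit XORs an id in, and every ack XORs it out again, so the checksum returns to zero exactly when everything emitted has been acked. The following standalone Java sketch is only an illustration of that idea, not Storm's actual implementation (class and method names are invented):

```java
import java.util.Random;

public class AckerSimulation {
    private long checksum = 0;

    // An anchored emit XORs the new tuple's random id into the checksum.
    public void emitted(long tupleId) {
        checksum ^= tupleId;
    }

    // An ack XORs the same id out again.
    public void acked(long tupleId) {
        checksum ^= tupleId;
    }

    // The tuple tree is fully processed when every emitted id has been
    // acked, i.e. every id has been XORed in and out and the checksum is 0.
    // (Two distinct pending ids cancelling by accident is astronomically
    // unlikely with random 64-bit ids.)
    public boolean treeComplete() {
        return checksum == 0;
    }

    public static void main(String[] args) {
        AckerSimulation acker = new AckerSimulation();
        Random rand = new Random();
        long root = rand.nextLong();
        long child = rand.nextLong();

        acker.emitted(root);   // spout emits the root tuple
        acker.emitted(child);  // bolt emits a child anchored to the root
        acker.acked(root);     // bolt acks the root tuple
        System.out.println(acker.treeComplete()); // false: child pending

        acker.acked(child);    // downstream bolt acks the child
        System.out.println(acker.treeComplete()); // true: checksum is 0
    }
}
```

The per-tree state is a single long, which is what lets Storm track millions of in-flight tuple trees cheaply.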
Let's take a look at the following WordCount topology bolt to understand the Storm
anchoring and acking API better:
SplitSentenceBolt : The purpose of this bolt is to split the sentence into
individual words and emit them. Now let's examine the execute and output declarer
methods of this bolt in detail (especially the numbered comments), as shown in the
following code:
public class SplitSentenceBolt extends BaseRichBolt {
  ...
  public void execute(Tuple tuple) {
    String sentence = tuple.getString(0);
    for (String word : sentence.split(" ")) {
      _collector.emit(tuple, new Values(word)); //1
    }
    _collector.ack(tuple); //2
  }

  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("word")); //3
  }
}
The numbered lines in the preceding code are elaborated as follows:
_collector.emit : Here, each tuple being emitted by the bolt (carrying the
field called word, as the second argument) is anchored using the first argument of the
method, the incoming tuple.
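The practical difference anchoring makes shows up when a downstream tuple fails: an anchored tuple traces back to the root tuple emitted by the spout, which is then replayed, whereas a tuple emitted without an anchor drops out of the tree and fails silently. The following standalone Java sketch simulates that contrast; it is illustrative only, with invented names, and is not the Storm API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AnchoringDemo {
    // Maps each emitted tuple id to the root tuple it is anchored to, if any.
    private final Map<Long, Long> anchorOf = new HashMap<>();
    private final Set<Long> replayedRoots = new HashSet<>();
    private long nextId = 1;

    // Anchored emit: the new tuple joins the root tuple's tree.
    public long emitAnchored(long rootId) {
        long id = nextId++;
        anchorOf.put(id, rootId);
        return id;
    }

    // Unanchored emit: the new tuple belongs to no tree.
    public long emitUnanchored() {
        return nextId++;
    }

    // Failing a tuple replays its root, but only if it was anchored.
    public void fail(long id) {
        Long root = anchorOf.get(id);
        if (root != null) {
            replayedRoots.add(root);
        }
    }

    public boolean wasReplayed(long rootId) {
        return replayedRoots.contains(rootId);
    }

    public static void main(String[] args) {
        AnchoringDemo demo = new AnchoringDemo();
        long root = 100L;
        long anchored = demo.emitAnchored(root);
        long loose = demo.emitUnanchored();

        demo.fail(loose);    // no anchor: nothing to replay
        System.out.println(demo.wasReplayed(root)); // false

        demo.fail(anchored); // anchored: the root gets replayed
        System.out.println(demo.wasReplayed(root)); // true
    }
}
```

This is why a bolt that forgets to pass the input tuple as the first argument to emit silently opts its output out of the reliability guarantee.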