        sum += val.get();
        context.write(ByteBufferUtil.bytes(word.toString()),
                Collections.singletonList(getMutation(word, sum)));
    }

    private static Mutation getMutation(Text word, int sum)
    {
        Column c = new Column();
        c.setName(ByteBufferUtil.bytes("count"));
        c.setValue(ByteBufferUtil.bytes(sum));
        c.setTimestamp(System.currentTimeMillis());
        Mutation m = new Mutation();
        m.setColumn_or_supercolumn(new ColumnOrSuperColumn());
        m.column_or_supercolumn.setColumn(c);
        return m;
    }
}
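In the listing above, ByteBufferUtil.bytes(...) (from org.apache.cassandra.utils) converts the column name, value, and row key into the ByteBuffers that the Thrift Mutation API expects. As a rough illustration of what that serialization looks like, here is a minimal sketch using only the JDK; the BytesSketch class and its helpers are local stand-ins written for this example, not Cassandra's actual ByteBufferUtil, though they follow the same convention (UTF-8 bytes for strings, 4-byte big-endian for ints):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BytesSketch {
    // Stand-in for ByteBufferUtil.bytes(String): UTF-8 encode the text.
    static ByteBuffer bytes(String s) {
        return ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
    }

    // Stand-in for ByteBufferUtil.bytes(int): a 4-byte big-endian buffer,
    // the usual on-the-wire encoding for integer column values.
    static ByteBuffer bytes(int i) {
        ByteBuffer b = ByteBuffer.allocate(4);
        b.putInt(i);
        b.flip(); // rewind so the buffer is ready for reading
        return b;
    }

    public static void main(String[] args) {
        System.out.println(bytes("count").remaining()); // 5 bytes for "count"
        System.out.println(bytes(334).getInt());        // round-trips to 334
    }
}
```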
The complete source code of this MapReduce job can be found with the downloads for this topic. The executable class is TwitterHDFSJob (in the com.apress.chapter5.mapreduce.twittercount.hdfs package). You can also refer to the README.txt file (under src/main/resources) for further instructions on setting up the database and running this MapReduce job.
After the job executes successfully, the output in the tweetcount column family will be as shown here:
[default@tweet_keyspace] list tweetcount;
Using default limit of 100
Using default cell limit of 100
-------------------
RowKey: Mon May 11 06:16:04 IST 2009
=> (name=count, value=334, timestamp=1407569960904)
-------------------