Figure 1.9: Simple multimedia summary of our synthetic email conversation, pairing a word cloud visualization with a textual summary ("Friends discuss vacation plans and an assignment").
Textual vs. Multimedia Output Traditionally, research on text summarization has focused on taking as input one or more documents and generating as output a textual summary of those documents. As we have already seen, even if we are restricted to textual output, there are two possibilities: the output can be either a subset of the sentences from the input (i.e., an extract), or a set of novel sentences that are automatically generated to describe the most important content extracted from the input (i.e., an abstract).
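To make the extractive case concrete, the following is a minimal sketch of a frequency-based sentence extractor; the function name, scoring scheme, and toy email sentences are illustrative assumptions, not a method described in this chapter.

```python
from collections import Counter
import re

def extractive_summary(sentences, k=2):
    """Score each sentence by the average corpus frequency of its words
    and return the top-k sentences in their original order
    (a minimal frequency-based extract, for illustration only)."""
    words = [w.lower() for s in sentences for w in re.findall(r"\w+", s)]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]

# Toy email conversation (hypothetical content).
emails = [
    "Let's plan the vacation for the first week of June.",
    "I still need to finish the assignment before we leave.",
    "The assignment is due Friday, so the vacation can start after that.",
]
print(" ".join(extractive_summary(emails, k=2)))
```

An abstractive system would instead generate new sentences (e.g., "Friends discuss vacation plans and an assignment") rather than copying input sentences verbatim.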
Depending on the user task and information needs, more possibilities can be envisioned if we move beyond textual summaries. Arguably, all the information mined from a conversation could be conveyed graphically. For instance, extracted topics could be visualized as a theme river, in which the temporal evolution of the strength of different topics is depicted as a multi-colored visual river flowing from left to right [Havre et al., 2002]. Similarly, extracted opinions can also be effectively conveyed graphically; Pang and Lee [2008] present some illustrative examples in Chapter 5 (the summarization chapter) of their survey. More generally, any information visualization for text analysis, like Word Trees and Word Clouds, could be effectively applied to text conversations [Hearst, 2009] (Chapter 11).
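As a small illustration of the kind of data such visualizations consume, the sketch below computes term-frequency weights that a word-cloud layout could use to size each word; the stopword list, tokenizer, and toy conversation text are assumptions made for the example.

```python
from collections import Counter
import re

# Minimal stopword list for the toy example; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "we", "i", "is", "for", "so"}

def word_cloud_weights(text, top_n=20):
    """Return the most frequent content words in the conversation;
    these counts are the weights a word-cloud renderer would use."""
    tokens = [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOPWORDS]
    return Counter(tokens).most_common(top_n)

conversation = (
    "Let's plan the vacation for June. "
    "I still need to finish the assignment before the vacation starts."
)
for word, weight in word_cloud_weights(conversation):
    print(f"{word}: {weight}")
```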
It is widely known that text and graphics are not mutually exclusive, but can actually complement each other. For instance, Carenini et al. [2006] present a multimedia opinion summarization system in which a visualization of the extracted opinions is integrated with a textual summary to support the user in the interactive exploration of the source documents. In Chapter 4, we will see that similar approaches can be applied to text conversations.
A simple example of a multimedia summary of our sample email conversation is shown in Figure 1.9.
Summarization evaluation As with all mining and retrieval tasks, it is critical to have dependable summarization evaluation metrics to assess various systems. It is also important to have widely used evaluation schemes so that researchers can compare results directly with one another and determine the state of the art. In recent years, several approaches to evaluation have become popular within the summarization community and have been adopted for periodic benchmark tasks.
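To give a flavor of what an automatic evaluation metric can look like, the following is a minimal sketch of a recall-oriented n-gram overlap score in the spirit of the widely used ROUGE-N family; the function names and example strings are illustrative, and real evaluations use multiple references and additional variants.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_recall(system, reference, n=1):
    """Fraction of reference n-grams that also appear in the system summary,
    with counts clipped by the reference frequency (ROUGE-N-style recall)."""
    sys_counts = Counter(ngrams(system.lower().split(), n))
    ref_counts = Counter(ngrams(reference.lower().split(), n))
    overlap = sum(min(count, sys_counts[gram]) for gram, count in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

# Toy system output vs. a hypothetical human reference summary.
print(ngram_recall(
    "friends discuss vacation plans and an assignment",
    "the friends discuss their vacation plans and a class assignment",
    n=1,
))
```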