Table 3.2: Dialogue act tag categories.

Tag   Description
S     Statement
P     Polite mechanism
QY    Yes-no question
AC    Action motivator
QW    Wh-question
A     Accept response
QO    Open-ended question
AA    Acknowledge and appreciate
QR    Or/or-clause question
R     Reject response
U     Uncertain response
QH    Rhetorical question
We will also see that, when the focus of the conversation is to make a joint decision and come
up with a set of action items, a critical mining task is to detect which turns in a conversation relate
to the underlying decision-making process. Making joint decisions is much more often the goal for
meeting and email conversations than for blogs, discussion forums, and chats.
3.4.2 DIALOGUE ACT MODELING
The task of labeling each turn in a conversation with the dialogue act(s) it is intended to perform
can be framed as a supervised machine learning classification problem. Since a turn can perform
multiple speech acts, a relatively simple technique is to define, for each dialogue act in the tagset, a
binary classifier that determines whether or not a given turn is performing the corresponding dialogue
act. Then, to determine the dialogue act labels for a turn, one simply applies all the binary classifiers
to that turn and collects the accepted dialogue act labels.
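The following is a minimal sketch of this one-binary-classifier-per-tag scheme, assuming scikit-learn is available. The tagset subset, the toy training turns, and the bag-of-words features are invented for illustration; they are not the classifiers or features used in the work discussed in this chapter.

    # One independent binary classifier per dialogue act tag; a turn
    # receives every tag whose classifier accepts it (multi-label).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    TAGS = ["S", "QY", "A", "R"]  # small subset of Table 3.2, for illustration

    # Hypothetical training turns, each with one or more dialogue act labels.
    train_turns = [
        ("The report is due on Friday.", {"S"}),
        ("Can you attend the meeting tomorrow?", {"QY"}),
        ("Yes, that works for me.", {"A", "S"}),
        ("No, I don't think that's a good idea.", {"R", "S"}),
    ]

    texts = [text for text, _ in train_turns]
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(texts)

    # Train one binary classifier per tag (tag present vs. absent).
    classifiers = {}
    for tag in TAGS:
        y = [1 if tag in labels else 0 for _, labels in train_turns]
        classifiers[tag] = LogisticRegression().fit(X, y)

    def tag_turn(turn):
        """Apply every binary classifier and collect the accepted tags."""
        x = vectorizer.transform([turn])
        return {tag for tag, clf in classifiers.items() if clf.predict(x)[0] == 1}

    print(tag_turn("Could you send me the slides?"))

Because the classifiers are independent, a turn can receive several tags at once, which is exactly what the multi-label framing requires; the cost is that correlations between tags (e.g., a rejection usually also being a statement) are not modeled directly.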
Cohen et al. [2004] followed this approach, focusing on email conversations in work
environments where people negotiate and coordinate joint activities (e.g., scheduling a meeting).
After analyzing several email corpora, they developed an email act tagset which aimed to capture
common communication patterns in email usage at work. Their dialogue act tagset consists of
several verbs that can be applied to nouns; for instance, the act of delivering a PowerPoint presentation
or the act of requesting that the recipient perform some activity (e.g., committee membership). In order to
implement and test the supervised approach, the corpora were annotated with this tagset. Agreement
among annotators was moderate (κ in the 0.72-0.83 range), which is quite common for dialogue act
annotation, especially when the tags are not too specific. Several experiments were then run to compare
different feature sets. In general, the overall performance of the approach was not satisfying.
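Since inter-annotator agreement is reported here as Cohen's kappa, a quick sketch of how such a value is computed may be useful; the two label sequences below are invented for illustration, not taken from the annotated corpora.

    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical dialogue act labels assigned by two annotators to the same turns.
    annotator_1 = ["S", "QY", "A", "S", "R", "S", "QW", "A"]
    annotator_2 = ["S", "QY", "A", "S", "A", "S", "QW", "A"]

    print(cohen_kappa_score(annotator_1, annotator_2))

Kappa corrects raw agreement for the agreement expected by chance, which matters for dialogue act annotation because a few tags (such as Statement) dominate most conversations.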