of the respective features are consistent throughout the reviews (e.g., in the
Hotel domain, “hot water” has a consistently positive connotation, whereas
“hot room” has a negative one).
In order to solve this task, opine initially assigns each (w, f) pair w's
SO label. The system then executes a relaxation labeling step during which
syntactic relationships between words and, respectively, between features, are
used to update the default SO labels whenever necessary. For example, (hot,
room) appears in the proximity of (broken, fan). If “room” and “fan” are
conjoined by and, this suggests that “hot” and “broken” have similar SO labels
in the context of their respective features. If “broken” has a strongly negative
semantic orientation, this fact contributes to opine's belief that “hot” may
also be negative in this context. Since (hot, room) occurs in the vicinity of
other such phrases ( e.g. , stifling kitchen ), “hot” acquires a negative SO label
in the context of “room”.
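The conjunction cue described above can be illustrated with a minimal sketch. This is not OPINE's actual implementation; the pair scores, label symbols, and function name are hypothetical, and the sketch shows only the core idea: a weakly labeled pair adopts the label of a strongly labeled pair it is conjoined with.

```python
# Illustrative sketch (not OPINE's implementation): propagating SO labels
# between (word, feature) pairs linked by a conjunction such as "and".
# All pair names and scores below are hypothetical.

def propagate_conjunction(labels, scores, pair_a, pair_b):
    """Conjoined pairs tend to share orientation, so the pair with the
    weaker-magnitude score adopts the label of the stronger one."""
    if abs(scores[pair_a]) >= abs(scores[pair_b]):
        strong, weak = pair_a, pair_b
    else:
        strong, weak = pair_b, pair_a
    labels[weak] = labels[strong]
    return labels

# "broken fan" is strongly negative; "hot room" starts with the default
# positive label inherited from "hot" on its own.
labels = {("hot", "room"): "+", ("broken", "fan"): "-"}
scores = {("hot", "room"): 0.2, ("broken", "fan"): -0.9}
propagate_conjunction(labels, scores, ("hot", "room"), ("broken", "fan"))
print(labels[("hot", "room")])  # "hot" becomes negative in the "room" context
```

A real relaxation labeling step would iterate such updates over many neighborhood constraints until the labels stabilize, rather than applying a single override.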
Finding (Word, Feature, Sentence) SO Labels
This subtask is motivated by the existence of (w, f) pairs (e.g., (big, room))
for which w's orientation changes depending on the sentence in which the pair
appears (e.g., “I hated the big, drafty room because I ended up freezing” vs.
“We had a big, luxurious room”).
In order to solve this subtask, opine first assigns each (w, f, s) tuple an
initial label, which is simply the SO label for the (w, f) pair. The system then
uses syntactic relationships between words and, respectively, features in order
to update the SO labels when necessary. For example, in the sentence “I hated
the big, drafty room because I ended up freezing.”, “big” and “hate” satisfy
condition 2 in Table 2.8 and therefore opine expects them to have similar
SO labels. Since “hate” has a strong negative connotation, “big” acquires a
negative SO label in this context.
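A simplified sketch of this sentence-level update follows. The tiny polarity lexicon and the helper name are assumptions for illustration; the point is that a (w, f, s) tuple starts from the (w, f) default and is overridden when a strongly polar word stands in a similarity-indicating syntactic relation to w.

```python
# Minimal sketch of the sentence-level SO update (hypothetical names).
# A strongly polar word that is syntactically related to w in sentence s
# overrides the default (w, f) label for the (w, f, s) tuple.

STRONG_NEGATIVE = {"hate", "hated", "awful"}   # tiny illustrative lexicon
STRONG_POSITIVE = {"love", "loved", "superb"}

def sentence_so_label(default_label, related_words):
    """Return the SO label for (w, f, s): start from the (w, f) default,
    then override it if a strongly polar related word is present."""
    for rw in related_words:
        if rw in STRONG_NEGATIVE:
            return "-"
        if rw in STRONG_POSITIVE:
            return "+"
    return default_label

# (big, room) defaults to "+", but "hated" is related to "big" in this
# sentence, so the tuple-level label becomes negative.
print(sentence_so_label("+", ["hated", "drafty"]))  # prints "-"
```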
In order to correctly update SO labels in this last step, opine takes into
consideration the presence of negation modifiers . For example, in the sentence
“I don't like a large scanner either,” opine first replaces the positive (w, f)
pair (like, scanner) with the negatively labeled pair (not like, scanner) and then
infers that “large” is likely to have a negative SO label in this context.
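The negation-handling step can be sketched as follows. The negation list and function name are assumptions; the sketch only shows the described transformation of flipping a pair's label and prefixing its head word when a negation modifier is present.

```python
# Sketch of negation handling (illustrative names, not OPINE's code):
# a negation modifier flips the pair's SO label and rewrites its head
# word, e.g. (like, scanner) -> (not like, scanner).

NEGATIONS = {"not", "n't", "never", "don't", "doesn't"}

def apply_negation(word, label, preceding_tokens):
    """If a negation modifier precedes the opinion word, flip the SO
    label and prefix the word with 'not'."""
    if any(tok in NEGATIONS for tok in preceding_tokens):
        flipped = "-" if label == "+" else "+"
        return "not " + word, flipped
    return word, label

print(apply_negation("like", "+", ["I", "don't"]))  # ('not like', '-')
```

With the pair rewritten as negative, the same similarity-based inference as before then pushes “large” toward a negative label in this context.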
After opine has computed the most likely SO labels for the head words of
each potential opinion phrase in the context of given features and sentences,
opine can extract opinion phrases and establish their polarity. Phrases whose
head words have been assigned positive or negative labels are retained as
opinion phrases. Furthermore, the polarity of an opinion phrase o in the context
of a feature f and sentence s is given by the SO label assigned to the tuple
(head(o), f, s).
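The extraction step above amounts to a simple filter over candidate phrases. The data layout and names below are hypothetical; the sketch just shows retaining phrases whose head words received a non-neutral label and reading each phrase's polarity off the (head(o), f, s) tuple.

```python
# Sketch of the final extraction step (illustrative data layout):
# keep candidate phrases whose head words got a non-neutral SO label,
# and take each phrase's polarity from its (head, feature, sentence) label.

def extract_opinion_phrases(candidates, so_labels):
    """candidates: list of (phrase, head, feature, sentence_id) tuples;
    so_labels: dict mapping (head, feature, sentence_id) -> '+', '-' or None.
    Returns the retained opinion phrases with their polarity."""
    opinions = []
    for phrase, head, feature, sid in candidates:
        label = so_labels.get((head, feature, sid))
        if label in ("+", "-"):          # neutral heads are discarded
            opinions.append((phrase, feature, label))
    return opinions

candidates = [
    ("very hot", "hot", "room", 1),
    ("fairly standard", "standard", "decor", 1),
]
so_labels = {("hot", "room", 1): "-", ("standard", "decor", 1): None}
print(extract_opinion_phrases(candidates, so_labels))
# [('very hot', 'room', '-')]
```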
2.3.6 Experiments
In this section we evaluate opine's performance on the following tasks: finding
SO labels of words in the context of known features and sentences (word