Feature set 3 : Cross-sentence Features
The following are two examples of cross-sentence features. The first refers to the label $y_i$ and the second refers to the pair $y_{i-1}$ and $y_i$:

$$h_i(y_c, x, [i-1, i]) = \begin{cases} 1, & \text{if } R.CS.1(e_a, e_b) \text{ and } y_i = \mathrm{B}_{\text{AFTER}} \\ 0, & \text{otherwise} \end{cases}$$

$$h_i(y_c, x, [i-1, i]) = \begin{cases} 1, & \text{if } R.CS.1(e_a, e_b) \text{ and } y_{i-1} = \mathrm{B}_{\text{AFTER}} \text{ and } y_i = \mathrm{I}_{\text{AFTER}} \\ 0, & \text{otherwise} \end{cases}$$
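The two indicator features above can be sketched as plain Python functions. This is a minimal illustration, assuming BIO-style labels such as "B-AFTER" and "I-AFTER" and treating the outcome of rule R.CS.1 on the entity pair $(e_a, e_b)$ as a precomputed boolean; the function names are illustrative, not from the paper.

```python
def h_unigram(rule_cs1_fires: bool, y_i: str) -> int:
    """First feature: fires when R.CS.1 matches the entity pair
    and the current label y_i is B-AFTER."""
    return 1 if rule_cs1_fires and y_i == "B-AFTER" else 0


def h_bigram(rule_cs1_fires: bool, y_prev: str, y_i: str) -> int:
    """Second feature: fires when R.CS.1 matches the entity pair
    and the label transition is B-AFTER -> I-AFTER."""
    return 1 if rule_cs1_fires and y_prev == "B-AFTER" and y_i == "I-AFTER" else 0
```

In a linear-chain CRF, the first feature depends only on the current label, while the second conditions on the transition between adjacent labels, which is how the rule's signal is propagated across the label sequence.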
4 Experiments
Dataset
We use the i2b2 2012 TLINK track dataset as our evaluation dataset. The dataset
contains 190 patient history records in its training set and 120 patient history
records in its test set.
Evaluation
The results are given as F-score, defined as $F = \frac{2 \times P \times R}{P + R}$, where $P$ denotes the precision and $R$ denotes the recall. The formulae for calculating $P$ and $R$ are as follows:

$$P = \frac{\text{the number of correctly recognized TLinks}}{\text{the number of recognized TLinks}}$$

$$R = \frac{\text{the number of correctly recognized TLinks}}{\text{the number of TLinks}}$$
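The evaluation metric can be transcribed directly from these formulas; the counts are illustrative variable names, not numbers from the paper.

```python
def f_score(n_correct: int, n_recognized: int, n_gold: int) -> float:
    """Compute F-score from TLink counts:
    n_correct    -- correctly recognized TLinks
    n_recognized -- all TLinks the system recognized
    n_gold       -- all TLinks in the gold standard
    """
    p = n_correct / n_recognized  # precision
    r = n_correct / n_gold        # recall
    return 2 * p * r / (p + r)
```

For example, 50 correct TLinks out of 80 recognized against 100 gold TLinks gives $P = 0.625$, $R = 0.5$, and $F \approx 0.556$.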
Results
Table 1 shows the performance of our rule-based and CRFs-based approaches.
The rule-based approach achieved an F-score of 55% and the CRFs-based approach
achieved an F-score of 61%. Table 2 shows the performance of our CRFs-based
approach on different relation pair categories. Our approach shows higher per-
formance on event and time/section TLinks. It achieved an F-score of 80% for
section-event pairs, which is the highest among all categories, and 67% for time-
event pairs. Event-event pairs sometimes have no TLink; however, our approach
fails to classify them into NULL. As a result, in the event-event category our
system only achieved a precision of 33%. Our system's performance was worst in
the coreference category (F-score of 36%), likely because we have only one rule to
classify coreference pairs.