queries of their own. Tip clickers also write better queries, probably because, after
seeing the column display, they have a much better sense of how queries can be
composed. Keyword tip clickers typically find the results engaging enough to spend
an average of 1.5 minutes studying the results: 37% of users go back and click on
more than one tip. Even better, 87% follow up by clicking on the “Try your own
Fact Search” link and try their own query. All of the queries attempted are queries;
90% produce results; our follow up analysis suggests that for two thirds of these
queries the results are relevant to the users search goals. In other words, users who
click on the tips are extremely likely not only to try their own fact search, but also
to pay enough attention to the format to write both valid and useful queries.
Examples in the Help File or Query Generator are largely ineffective at getting
users to try Fact Search. Because the results returned by the examples usually do
not relate to what the user wishes to search on, the column display is
more of a distraction than an enticement to try Fact Search. However, those who go
on to try Fact Search after clicking on an example have a better chance of writing
good queries. Example link clickers are less likely to experiment with Fact Search
or invest time learning how it works. Seventy-two percent of users end their session
after clicking on one or more examples, not even returning to perform the keyword
search that presumably brought them to the site in the first place. Of the 28% who
do not leave the site after clicking an example, two thirds go on to try a Fact
Search. Only 6% of users click on examples after having tried a Fact Search query
on their own. Analysis of this user group suggests that examples have a place in the
UI, but are not sufficiently compelling by themselves to motivate users to try Fact Search.
However, this evidence does lend support to the hypothesis that users who see the
column display are more likely to create valid queries: 60% of the users who click on
examples and go on to write their own queries write valid queries and get results,
which is still a much higher percentage than for users who blindly try to create
queries.
About 75% of users who try Fact Search directly using the IQL syntax, without
first seeing the column display, fail to get results. Forty-five percent of users
write invalid queries in which nouns are inserted in the action field (the most common
error). Other common errors are specifying too much information and attaching
prepositions to noun phrases. We can detect some of these errors automatically,
and we plan to provide automatic guidance to users going forward (a sketch of such
a check appears below). About 20% of
query creators get impressive results. Most successful users get their queries right
on the first shot, and, in general, seem unwilling to invest much time experimenting.
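To give a concrete sense of these checks, the following is a minimal sketch in Python. It assumes queries take the three-field form subject > action > object, as in “Bush > meet > [person],” and uses a small illustrative preposition list; a real implementation would need a part-of-speech tagger to catch single nouns placed in the action field.

PREPOSITIONS = {"by", "to", "of", "in", "on", "at", "with", "for"}

def check_query(query: str) -> list[str]:
    """Return warnings for a three-field IQL query string."""
    fields = [f.strip() for f in query.split(">")]
    if len(fields) != 3:
        return ["expected three fields: subject > action > object"]
    tokens = fields[1].split()  # the action (verb) field
    warnings = []
    if len(tokens) > 1:
        # Multi-word action fields usually hide a noun phrase
        # ("cyber attack") or a trailing preposition ("led by", "go to").
        if tokens[-1].lower() in PREPOSITIONS:
            warnings.append(f"drop the preposition; try '{tokens[0]}' alone")
        else:
            warnings.append("the action field should be a single verb")
    return warnings

# Example queries built from the error patterns described above:
print(check_query("hackers > cyber attack > government"))  # single-verb warning
print(check_query("Bush > led by > [person]"))             # preposition warning

Warnings like these could be displayed next to the query box as the user types.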
Successful users are most likely expert analysts. By reproducing their searches and
inspecting their results, we estimate that they form a positive impression of Fact
Search. In 75% of cases the results of Fact Search direct the user quickly to the
relevant parts of relevant documents, providing a deeper overview and faster navigation
of content. However, in 25% of cases, expert users also write queries that return
no results. Reasons for this include specifying too much information or placing
modifiers or prepositional terms in the verb field, such as “cyber attack,” “led by,”
and “go to.” In many cases users would succeed by entering the verb alone. In
some cases, users get many Fact Search results but lack the experience to refine
their query, so they simply go back to keyword search. We should try to communicate
how queries can be modified further when there are too many results, perhaps by
adding an ontology tag or a context operator to the query syntax. For instance, the
query “Bush > meet > [person]” could yield a large number of irrelevant results, if
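To illustrate the kind of refinement described above, the sketch below narrows the example query by swapping the generic [person] tag for a more specific ontology tag and appending a context operator. The “ctx:” syntax is purely a placeholder assumption; IQL’s eventual context operator may look different.

def refine_query(query: str, ontology_tag: str | None = None,
                 context: str | None = None) -> str:
    """Return a narrowed copy of an IQL query string."""
    refined = query
    if ontology_tag:
        # Swap the generic [person] tag for a more specific one,
        # e.g. [politician], to filter out irrelevant matches.
        refined = refined.replace("[person]", f"[{ontology_tag}]")
    if context:
        # Append a context operator; "ctx:" is a placeholder syntax.
        refined += f" ctx:{context}"
    return refined

print(refine_query("Bush > meet > [person]",
                   ontology_tag="politician", context="summit"))
# Bush > meet > [politician] ctx:summit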