course was “How many feet in a mile?” … I loved the idea, went straight to
NBC with the idea, and they bought it without even looking at a pilot show. 10
Each round of the game has six categories, each with five clues for a different amount of money (Fig. 14.13). The categories range from standard topics such as history, science, literature, and geography to popular culture and word games, such as puns. An example in the category of “U.S. Presidents”
could be the clue “The Father of Our Country; he didn't really chop down a
cherry tree.” The contestant would have to reply “Who is George Washington?”
As an example of wordplay, under the category “Military Ranks” could be the clue “Painful punishment practice,” to which the answer is “What is corporal?” The first contestant to “buzz” after the host has read the entire clue
wins the chance to have the first guess. With a correct reply, he or she wins
the designated amount for the clue; a wrong reply loses the amount of the clue
and allows the other contestants a chance to buzz in. The game also has three
“Daily Double” clues, which allow contestants to wager a minimum of $5 up to
the maximum of all their winnings, and a “Final Jeopardy!” round where the
contestants write down their answer and may gamble all their winnings. If they
answer correctly, they win their bet, and a successful gamble can transform the
result of the game. The contestant with the highest total after the final round
is the winner.
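The scoring and wagering mechanics described above are simple enough to sketch in a few lines of code. The sketch below is a minimal illustration only, assuming nothing beyond the rules as stated here; the function names and the simplifications are invented for the example.

```python
# Minimal sketch of the scoring and wagering rules described above.
# Function names and simplifications are illustrative assumptions,
# not an official rule set.

def answer_clue(score: int, clue_value: int, correct: bool) -> int:
    """A correct reply adds the clue's value; a wrong reply subtracts it."""
    return score + clue_value if correct else score - clue_value

def clamp_wager(score: int, requested: int) -> int:
    """Daily Double wager as described above: at least $5,
    at most the contestant's current winnings."""
    return max(5, min(requested, max(score, 5)))

if __name__ == "__main__":
    score = 0
    score = answer_clue(score, 400, correct=True)    # right answer: +$400 -> $400
    score = answer_clue(score, 200, correct=False)   # wrong answer: -$200 -> $200
    wager = clamp_wager(score, requested=1000)       # capped at the current $200
    score = answer_clue(score, wager, correct=True)  # Daily Double won -> $400
    print(score)
```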
The first Jeopardy! superstar contestant was Ken Jennings, a computer programmer from Salt Lake City, Utah. From June until November 2004, Jennings had an amazing seventy-four-game winning streak and won more than 2.5 million dollars. Far from boring the audience, his streak lifted the show's ratings by more than 50 percent. A key factor in Jennings's dominance was his lightning-fast reflexes: he won the race to the buzzer on more than half the clues.
Paul Horn's suggestion for IBM to produce a machine to play Jeopardy! was
controversial. It was not until a year later that he was able to persuade David
Ferrucci ( B.14.5 ), head of the Semantic Analysis and Integration Department at
IBM Research, to take on the challenge. Ferrucci had many reasons for his skepticism. One of the teams he led was developing a question-answering system
called Piquant, short for Practical Intelligent Question Answering Technology.
Each year, there was a contest at the Text Retrieval Conference, a gathering of
researchers focusing on information retrieval, in which competing teams were
given a million documents on which to train their systems. In these competitions, based on this very restricted knowledge base, the IBM Piquant system got two out of three questions wrong. In an initial six-month trial period, the Piquant team trained its system to answer Jeopardy! questions using five hundred specimen clues. Although Piquant did better than a search-engine-based approach that used the Web and Wikipedia, it succeeded only 30 percent of the time.
From this first disappointing trial, Ferrucci concluded that he needed to adopt a
much broader approach that made use of multiple AI technologies. He therefore
assembled machine-learning and natural language processing experts from IBM
Research and reached out to university researchers at Carnegie Mellon and MIT.
Undaunted by the result of the trial, Ferrucci told Horn that he would deliver a Jeopardy! machine that could compete with humans within twenty-four months.
He gave the project the code name Blue-J. A year later, the resulting machine
was christened “Watson” for IBM's first president, Thomas J. Watson.
Fig. 14.12. The Watson computer at the IBM laboratory in Yorktown Heights, New York, with the Watson logo.
Fig. 14.13. A typical Jeopardy! game board.
B.14.5. David Ferrucci graduated with a degree in biology and a PhD in computer science. His main research areas are natural language processing, knowledge representation, and discovery. He joined IBM in 1995 and led the “Watson/Jeopardy!” project from its inception in 2007. After an initial feasibility study, Ferrucci assembled a twenty-five-member team that, in four years, developed a system that not only could “understand” natural language but also could beat the superstar winners of the question-answering game Jeopardy!
 