Remember IBM’s Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed, “Watson understands natural language with all its ambiguity and complexity.” However, as we saw when Watson subsequently failed spectacularly in its quest to “revolutionize medicine with artificial intelligence,” a veneer of linguistic facility is not the same as actually comprehending human language.

Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction or anything else humans might write. This approach, as Watson showed, was futile - it’s impossible to write down all the unwritten facts, rules and assumptions required for understanding text. More recently, a new paradigm has been established: Instead of building in explicit knowledge, we let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words. The result is what researchers call a language model (a toy sketch of this prediction objective appears at the end of this section). When based on large neural networks, like OpenAI’s GPT-3, such models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning.

But has GPT-3 - trained on text from thousands of websites, books and encyclopedias - transcended Watson’s veneer? Does it really understand the language it generates and ostensibly reasons about? This is a topic of stark disagreement in the AI research community. Such discussions used to be the purview of philosophers, but in the past decade AI has burst out of its academic bubble into the real world, and its lack of understanding of that world can have real and sometimes devastating consequences. In one study, IBM’s Watson was found to propose “multiple examples of unsafe and incorrect treatment recommendations.” Another study showed that Google’s machine translation system made significant errors when used to translate medical instructions for non-English-speaking patients.

How can we determine in practice whether a machine can understand? In 1950, the computing pioneer Alan Turing tried to answer this question with his famous “imitation game,” now called the Turing test. A machine and a human, both hidden from view, would compete to convince a human judge of their humanness using only conversation. If the judge couldn’t tell which one was the human, then, Turing asserted, we should consider the machine to be thinking - and, in effect, understanding.

Unfortunately, Turing underestimated the propensity of humans to be fooled by machines. Even simple chatbots, such as Joseph Weizenbaum’s 1960s ersatz psychotherapist Eliza, have fooled people into believing they were conversing with an understanding being, even when they knew that their conversation partner was a machine.

In a 2012 paper, the computer scientists Hector Levesque, Ernest Davis and Leora Morgenstern proposed a more objective test, which they called the Winograd schema challenge.
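To make the word-prediction objective described above concrete, here is a minimal sketch in Python: a bigram model that counts, for each word in a tiny made-up corpus, which word most often follows it. The corpus and the names `follower_counts` and `predict_next` are invented for this illustration; a system like GPT-3 pursues the same kind of objective with a large neural network over vastly more text, not with simple counts.

```python
from collections import Counter, defaultdict

# A stand-in "corpus"; a real language model ingests billions of words.
corpus = (
    "the judge could not tell which one was the human . "
    "the machine tried to convince the judge . "
    "the human tried to convince the judge ."
).split()

# "Training": count how often each word follows each preceding word.
follower_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` seen in the corpus."""
    followers = follower_counts[word]
    if not followers:
        raise KeyError(f"never saw a word after {word!r}")
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # -> 'judge' (follows 'the' 3 times above)
```

Even this toy captures the key point: nothing in the procedure requires the machine to understand what a judge or a human is - it only tracks which words tend to follow which, which is why fluent prediction alone settles so little.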