Artificial Intelligence
What if these theories are really true, and we were magically shrunk and put into someone's brain while he was thinking. We would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!
Gottfried Wilhelm Leibniz (1679).
Not even a century ago -- in fact, not even half a century ago -- few people could have imagined the present-day world, with computers running most government and business processes and the Internet reaching millions of homes. It would have been harder still to imagine artificial intelligence (AI) -- the attempt to create a machine that can learn, adapt, reason, and correct or improve itself. Whether this will ever become a reality is still unknown. Artificial-life pioneer Chris Langton suggests that such an "intelligent entity" will never be accepted as one. He believes that "when scientists are faced with the choice of either admitting that the computer process is alive, or moving the goalposts to exclude the computer from the exclusive club of living organisms, they will choose the latter." Is this true? Will humans never admit that a computer can actually be alive? Or will they instead decide there is nothing special about life, and that life can therefore be designed, built and replicated? At least for the time being, there is no answer to this dilemma.
According to the American Association for Artificial Intelligence, AI is "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines." The roots of this science reach back at least to 1821, when Charles Babbage stared at a table of logarithms and remarked, "I think that all these tables might be calculated by machinery." From then on, Babbage devoted much of his life to designing the first programmable computer.
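Babbage's Difference Engine design shows why the remark was more than a daydream: the values of any polynomial can be tabulated using nothing but repeated addition of finite differences, so a machine built from gears could, in principle, grind out whole tables (logarithms were handled with local polynomial approximations). The Python fragment below is a purely illustrative sketch of that principle -- the engine itself, of course, was mechanical, not programmed:

    def difference_engine(initial_differences, steps):
        """Tabulate a polynomial using only addition, by the method of
        finite differences -- the principle behind Babbage's Difference
        Engine. `initial_differences` holds the function's starting value
        followed by its finite differences at that point."""
        diffs = list(initial_differences)
        table = []
        for _ in range(steps):
            table.append(diffs[0])
            # Update each column by adding the column to its right.
            for i in range(len(diffs) - 1):
                diffs[i] += diffs[i + 1]
        return table

    # Tabulate f(x) = x**2 for x = 0, 1, 2, ...
    # f(0) = 0, first difference f(1) - f(0) = 1, second difference = 2.
    print(difference_engine([0, 1, 2], 8))  # [0, 1, 4, 9, 16, 25, 36, 49]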
Much later, in 1943, Babbage's idea finally took hold when Warren McCulloch (a psychiatrist, cybernetician, philosopher, and poet) and Walter Pitts (a research student in mathematics) published an innovative paper combining early twentieth-century ideas on computation, logic, and the nervous system -- a paper that promised to revolutionize psychology and philosophy. The next year, Harvard University unveiled the first American programmable computer, the Mark I.
It did not take long for the British scientist Alan Turing to see the similarity between the computational process and human thinking. His 1950 paper, "Computing Machinery and Intelligence," set the direction for the remainder of the century -- developing computers for game playing, decision making, natural-language understanding, translation, theorem proving and code breaking.
To help recognize if and when a computer had actually become intelligent, Turing proposed the "imitation game," in which an interrogator interviews a human being and a computer without knowing which is which, the communication taking place entirely through textual messages. Turing argued that if the interrogator could not distinguish the two by questioning, then it would be unreasonable not to call the computer intelligent.
Turing's game is now usually called "the Turing test for intelligence."
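The game's structure is simple enough to sketch in code. The Python below is purely illustrative -- Turing proposed a thought experiment, not a program -- and the canned question and answers are hypothetical stand-ins:

    import random

    # Illustrative sketch only. Each hidden player is a function from a
    # question to a textual answer.

    def human(question):
        return "Toast, with too much butter."

    def machine(question):
        # A machine that "passes" answers indistinguishably from the human.
        return "Toast, with too much butter."

    def imitation_game(interrogator):
        """Play one round: hide the players behind the labels X and Y,
        and return True if the interrogator correctly names the machine."""
        labels = ["X", "Y"]
        random.shuffle(labels)                         # secret assignment
        players = dict(zip(labels, [human, machine]))  # labels[1] hides the machine
        question = "What did you have for breakfast?"
        answers = {label: players[label](question) for label in ("X", "Y")}
        guess = interrogator(question, answers)        # interrogator names "X" or "Y"
        return guess == labels[1]

    # With indistinguishable answers the interrogator can only guess, so the
    # machine is identified at roughly chance level:
    trials = 1000
    caught = sum(imitation_game(lambda q, a: random.choice(["X", "Y"]))
                 for _ in range(trials))
    print(f"machine identified in {caught} of {trials} rounds")  # about 500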
In the 1950s, Newell, Shaw and Simon created the Logic Theorist, followed by the General Problem Solver -- programs that used recursive search techniques, in which a solution is defined in terms of solutions to smaller instances of the same problem. IBM developed the first program that could play a full game of chess in 1957. The following year, Newell, Shaw and Simon observed, "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until -- in a visible future -- the range of problems they can handle will be co-extensive with the range to which the human mind has been applied" (Simon, p. 3).
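What "defining a solution in terms of itself" means is easiest to see in code. The fragment below is a minimal, hypothetical sketch of recursive, goal-directed proof search in the spirit of the Logic Theorist; the original program was written in the IPL list-processing language and worked on theorems from Principia Mathematica, whereas the rules here are toy examples:

    # Toy rules: each goal maps to lists of subgoals that establish it.
    RULES = {
        "C": [["A", "B"]],   # A and B together prove C
        "B": [["A"]],        # A alone proves B
    }
    AXIOMS = {"A"}           # statements accepted without proof

    def provable(goal, depth=0, limit=10):
        """Return True if `goal` follows from AXIOMS via RULES.

        The search is recursive: whether a goal is provable is defined in
        terms of whether its own subgoals are provable, with a depth limit
        so the search always terminates."""
        if depth > limit:
            return False
        if goal in AXIOMS:
            return True
        for subgoals in RULES.get(goal, []):
            if all(provable(g, depth + 1, limit) for g in subgoals):
                return True
        return False

    print(provable("C"))  # True: C needs A and B; B needs A; A is an axiom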
In 1967, an MIT computer won the first tournament match against a human player. The world chess champion Garry Kasparov said in 1988 that there was "no way" a grandmaster would be defeated by a computer in tournament play before 2000; ten months later he lost that bet. Many people then changed their tune, arguing that winning a chess game did not require "real" intelligence after all. For a number of people, though, the connection between human and machine was getting a little too close for comfort.