A Brief History of Artificial Intelligence #1: How long will it take for computers to become smarter than humans?
Herbert Simon, one of the pioneers of artificial intelligence, predicted in 1965, “In 20 years, machines will be able to do everything humans can.” In 2021, more than 50 years later, machines still cannot do everything humans can, but remarkable advances have brought them into nearly every corner of society. Going one step further, AI Network is trying to connect the world through artificial intelligence. Understanding exactly what artificial intelligence is matters to the AI Network community, so in this series we look at the history, development, and future of the field. Today we talk about the main events and figures of the 1950s and 1960s, the early days of artificial intelligence.
# Alan Turing (Turing Machine, Turing Test)
British mathematician and computer scientist Alan Turing is often called the ‘father of computer science.’ Early in his career he conceived the Turing machine: an abstract device that reads and writes symbols one step at a time according to a table of rules, and that can, given the appropriate algorithm, carry out any computation. It was a hypothetical machine that existed only on paper, but it was a machine that inspired artificial intelligence.
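To make the idea concrete, here is a minimal sketch of a Turing machine in Python. The rule-table encoding below is our own illustration, not Turing’s original notation:

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Simulate a Turing machine. `rules` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); a blank cell is "_"."""
    cells = dict(enumerate(tape))  # sparse tape, extendable both ways
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: add 1 to a binary number, with the head
# starting on the rightmost (least significant) digit.
increment = {
    ("start", "0"): ("1", 0, "halt"),    # 0 -> 1, no carry, halt
    ("start", "1"): ("0", -1, "start"),  # 1 -> 0, carry moves left
    ("start", "_"): ("1", 0, "halt"),    # carry past the left edge
}

print(run_turing_machine(increment, "1011", head=3))  # 1011 + 1 -> 1100
```

A tape, a head, a state, and a table of rules: in principle that is all the machinery needed, because with the appropriate rule table such a device can carry out any computation.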
He also proposed the Turing test, which laid a foundation for artificial intelligence. In his 1950 paper ‘Computing Machinery and Intelligence,’ Turing suggested that machines can think like humans. He argued that if, in conversation, you cannot tell a computer’s responses from a human’s, the machine should be considered to be ‘thinking.’
# IBM’s Checkers Program
Checkers is similar to the chess we know, but a little different. In 1952, Arthur Samuel developed the first machine learning-based checkers program. He built it while working for IBM, on the IBM 701, IBM’s first commercial computer. Checkers was among the most complex games a computer had tackled at the time, and Samuel’s program improved by learning from experience.
On February 24, 1956, the IBM 701 checkers program was shown on TV for the first time, and in 1962 checkers master Robert Nealey played against the computer. Much as the world was astonished by AlphaGo decades later, people at the time were stunned to see the computer win, and the news filled the next day’s newspapers. It was perhaps the first time the public realized that humanity’s intellectual superiority could be challenged by an electronic machine, and it stands as an early-1960s milestone that showed the public what computers, and artificial intelligence, could do. Believe it or not, IBM’s stock is said to have jumped on the news. That was about 70 years ago. Samuel was a pioneer of artificial intelligence in the United States, and in 1959 he popularized the term “machine learning” that we use today.
Around the same time, in the summer of 1956, the historic Dartmouth Conference was held at Dartmouth College in the United States. John McCarthy, who organized the conference, was a professor of mathematics at Dartmouth. The meeting was based on the conjecture that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.’ He wanted to create a framework for better understanding human intelligence and a way to make machines more cognitive, and he used the term ‘artificial intelligence’ for the first time to describe this work. He is also the creator of the AI programming language LISP.
McCarthy’s proposal, which called for ten researchers to gather and study artificial intelligence for two months, was where he first used the phrase ‘artificial intelligence.’ The workshop itself took place the following summer, between July and August 1956. In the proposal, the authors described what artificial intelligence was meant to achieve.
The proposal said, ‘An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves,’ and it framed the problem as that of making a machine behave in ways that would be called intelligent if a human were so behaving.
# GPS, a Program That Modeled Thought
Herbert Simon and Allen Newell, two scholars who attended the Dartmouth Conference with McCarthy, first created a program called the Logic Theorist and then went further with the General Problem Solver (GPS). GPS was an attempt to model the human problem-solving process and transfer it to a computer. They argued that computers, which at the time were seen merely as machines that calculate numbers faster than humans, could in fact manipulate symbols of any kind, numeric or not. They believed that what the mind does when humans solve problems and the symbol manipulation computers perform when they run programs are deeply similar. As mentioned earlier, Simon was already predicting in 1965, “In the next 20 years, machines will be able to do anything that a person can do.”
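GPS worked by what Newell and Simon called means–ends analysis: compare the current state with the goal, pick an operator that reduces the difference, and recursively satisfy that operator’s preconditions first. Here is a toy sketch of that loop in Python; the ‘get to the office’ facts and operators are our own hypothetical example, not from GPS itself:

```python
def achieve(state, goal, operators, plan, depth=10):
    """Means-ends analysis: grow `plan` until `goal` (a set of facts)
    holds in `state`. Operators are (name, preconds, adds, deletes).
    Simplified sketch: no backtracking over operator choice."""
    if depth == 0:
        return None
    while not goal <= state:
        for name, pre, add, delete in operators:
            if add & (goal - state):  # operator reduces the difference
                # Subgoal: satisfy the operator's preconditions first.
                sub = achieve(state, pre, operators, plan, depth - 1)
                if sub is not None:
                    state = (sub - delete) | add
                    plan.append(name)
                    break
        else:
            return None  # no operator reduces the difference
    return state

# Hypothetical operators: (name, preconditions, additions, deletions)
ops = [
    ("grab-keys", {"at-home"}, {"have-keys"}, set()),
    ("walk-to-car", {"at-home"}, {"at-car"}, {"at-home"}),
    ("drive-to-office", {"at-car", "have-keys"}, {"at-office"}, {"at-car"}),
]

plan = []
achieve({"at-home"}, {"at-office", "have-keys"}, ops, plan)
print(plan)  # ['grab-keys', 'walk-to-car', 'drive-to-office']
```

The real GPS was far more elaborate, with difference tables and recursive goal structures, but the loop above captures the core idea Newell and Simon championed: treating problem solving as symbol manipulation.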
The 1956 Dartmouth Conference is widely regarded as the starting signal for AI research. The 1950s and 1960s were a period of optimism rather than pessimism about artificial intelligence, and they marked its first steps.
AI Network is a blockchain-based platform that aims to innovate the AI development environment. It provides a global back-end infrastructure with millions of open-source projects deployed live.
If you want to know more about us,
- AI Network website: https://ainetwork.ai/
- AI Network Official Telegram Group (English): https://t.me/ainetwork_en
- Ainize: https://ainize.ai
- AI Network YouTube: https://www.youtube.com/channel/UCnyBeZ5iEdlKrAcfNbZ-wog
- AI Network Facebook: https://www.facebook.com/ainetworkofficial
- AI Network Twitter: https://twitter.com/ai__network