Welcome to Snapshots of Social Science - a quarterly newsletter where we bring to you recent developments in diverse fields of social science. Learn more about us here.
This month brings the third and final part of a series looking at the evolution of artificial intelligence, from cybernetics to ChatGPT. Inspired by the popularity of ChatGPT, this newsletter dives into great moments in the history of how humans have perceived intelligence in computers. The first part of the series explored the Macy conferences and the field of cybernetics. The second part presented moments from the second half of the 20th century that laid the building blocks of artificial intelligence as we know it today. In this part, we look at modern-day artificial intelligence: from the landmark moment when a computer beat a world chess champion, to 2023, when ChatGPT became a familiar part of our lives. This overview is in no way exhaustive: the field is vast, and we have picked just a few moments from the long trajectory of modern artificial intelligence.
Before we begin, please consider subscribing to ICJS Social Science by clicking the button below! This will ensure that each edition of our newsletter makes its way to you every month. We would truly appreciate it if you spread the word about us as well - our impact grows with each additional reader.
A short recap
In part 1 of this series, we explored the Macy conferences, where a new area of study called cybernetics was taking shape. These conferences brought together experts from diverse fields like mathematics, biology, physics, neuroscience, philosophy, linguistics, and sociology. We also saw how connections between biology and computer science were forming—around neurons, DNA, self-replicating machines, and the transfer of information—which laid the foundation for cybernetics. Against this background came the 1956 Dartmouth workshop. In part 2, we saw the formation of two camps of AI: symbolic and connectionist. While symbolic AI treated the manipulation of symbols as intelligence, connectionist approaches represented knowledge as patterns of connections among simple, neuron-like units. We also spoke briefly about how interest in AI has waxed and waned over time, in cycles known as AI springs and winters.
Deep Blue and Kasparov
For all the strides artificial intelligence was making, it was never expected to out-compete world experts. Perhaps it could perform better than an average human being, but a grandmaster? That seemed impossible.
This changed on May 11, 1997. Deep Blue, a supercomputer developed by an IBM research team led by Murray Campbell and Feng-hsiung Hsu, defeated world chess champion Garry Kasparov in a nail-biting six-game match. Deep Blue could evaluate hundreds of millions of chess positions every second. A paper written by Deep Blue's leading architects describes its construction and workings: it relied on massively parallel search and an extensive library of chess openings and endgames, among many other things.
Interestingly, though, Deep Blue's victory may owe something to a failure. During the match, one of its moves is believed to have been the result of a bug. Kasparov, taken aback, read the inexplicable move as a strategy too deep for him to figure out, and the doubt shaped his subsequent play. He resigned soon after.
Go
But chess isn’t the only game where the computer has surpassed a world expert. In 2016, AlphaGo - an AI program developed by Google DeepMind - defeated world champion Lee Sedol at the game of Go, which is believed to be even harder than chess. AlphaGo worked by first learning to predict moves from records of expert human games, then improving by playing millions of games against itself - discovering strategies no human had tried. This proved so fruitful that humans who studied AlphaGo's games became better players themselves - almost as though they were learning from the computer (which had learned from them).
GPT-3 and ChatGPT
GPT-3, developed by OpenAI, stands as one of the most advanced language processing AI models to date. Standing for "Generative Pre-trained Transformer 3," GPT-3 is a deep learning neural network that can generate human-like text based on the input it receives. It has a staggering 175 billion parameters, which are the aspects of the model that are learned from the training data. One of the applications of GPT-3 is ChatGPT, which is a specific implementation of the GPT-3 model designed for interactive text-based conversations. ChatGPT utilizes the vast knowledge and language skills of GPT-3 to engage in meaningful and coherent conversations with users, making it a powerful tool for various tasks such as answering questions, providing explanations, or generating creative content. ChatGPT's ability to understand context, generate contextually relevant responses, and simulate human-like conversation makes it a valuable tool in the realm of artificial intelligence and natural language processing.
And the above paragraph is ChatGPT’s own response when prompted to describe GPT-3 and itself. It shows how far we have come since the 1940s.
Challenges and future ahead
Through this newsletter series, we have traced the evolution of modern-day artificial intelligence and its achievements. But what does the road ahead look like? Are we in the middle of an AI spring, waiting for an AI winter? What challenges does AI face? Will it ever be able to replace human beings? Should we be in awe of AI, or should we fear it?
First, this is a space where legal regulations remain ambiguous, and an innovation’s impact on society will depend on how it is regulated - as well as on who owns the innovation and what they intend to use it for. The interplay between business, technology, and law will be an important factor in determining what AI looks like in the future.
The use of AI in many fields is still limited. In the legal field, for example, there are philosophical concerns with using algorithms to make decisions like setting bail, even though judges themselves have been shown to be biased at the same task. This is because law is designed for humans, by humans; replacing judges with algorithms undermines that philosophy. In fields like medicine, meanwhile, a high level of precision and accuracy is essential. While AI performs very well at such tasks - probably better than humans - can we trust a machine with our lives? This is not to say that the legal and medical fields are not using AI; they are, and in transformational ways. But an AI tool's applicability will also have to pass the test of upholding trust and the field's underlying philosophy.
The last caveat: will AI ever be able to experience emotions and consciousness the way we do? There are conflicting responses to this. Emotions are also a part of rationality and intelligence. If AI can be intelligent in other respects, why not emotionally? But at the same time, would it feel? What is the experience of feeling something? What is consciousness?
I will leave you to ponder on these questions. I hope you enjoyed reading this three-part series on Artificial Intelligence - from Cybernetics in the 1940s to ChatGPT in 2023.
Do watch
The documentary on Deep Blue versus Garry Kasparov
Spread the Word!
Thank you so much for reading! Please share this newsletter with anyone who would be interested. Press the button below to share!