The Evolution of Artificial Intelligence

Roughly a century after Charles Babbage sketched out the first design for a general-purpose computer, scientists began tinkering with the idea that technology could be used to emulate the human brain. The concept of a neural network [1] was sparked in the 1940s and pursued fervently. Because neural networks attempt to replicate the way humans think, they form the basis for artificial intelligence, which also needs data, computing power, speed, and algorithms. The AI chatbots of today have been roughly 80 years in the making, with no one knowing, and some even fearing, where it all will end.

As neural network research revved up and World War II cooled down, Alan Turing proposed a benchmark in 1950: a machine could be credited with human-like intelligence if, in conversation, it was indistinguishable from a human. This became known as the Turing Test [3].

The 1950s also saw the term “artificial intelligence” officially coined at a 1956 conference at Dartmouth College [2], and the decade ended with Arthur Samuel, an IBM employee, introducing the term “machine learning” [4] in 1959. But strict limitations persisted. Computing was extremely expensive, costing upwards of $200,000 to rent for one month [2], as well as slow and unable to store much data.

Enter Gordon Moore, the co-founder of Intel (and later its CEO), who birthed Moore’s Law [5]. The law states that the power and speed of computers roughly double every two years while the cost is halved. That observation has guided the semiconductor industry’s research and development targets for decades. Until the 1980s, the development of AI was indeed largely limited by the capabilities of the computers needed to support it.
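The compounding effect of that doubling is easy to put in concrete terms. Here is a minimal sketch; the 1971 starting point is the Intel 4004’s roughly 2,300 transistors (a real figure), while the steady doubling-every-two-years rate is the idealized assumption of Moore’s Law, not a measured trend:

```python
def transistors(start_count, start_year, year):
    """Project a chip's transistor count, assuming an idealized
    Moore's Law doubling every two years from a known baseline."""
    doublings = (year - start_year) / 2
    return start_count * 2 ** doublings

# Intel 4004 (1971): ~2,300 transistors. Fifty years is 25 doublings.
count_2021 = transistors(2_300, 1971, 2021)
print(f"{count_2021:,.0f}")  # prints 77,175,193,600
```

Fifty years of doubling turns a few thousand transistors into tens of billions, which is the same order of magnitude as recent flagship chips, illustrating why the law held canonical status for so long.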

AI was also limited by its promises. The AI industry crashed like a corrupted hard drive from 1974 to 1980 and again from 1987 to 1993. These “AI winters” [6] were characterized by plummeting investment and interest after brief periods of over-promised, under-delivered hype. Nevertheless, the AI industry rebooted and made headlines in the 1990s. Notably, in 1997 IBM’s Deep Blue beat the reigning world chess champion [2], and the first speech recognition software [2] was implemented on Windows.

As far as chatbots are concerned, the journey to ChatGPT began with basic chatbots [7] as far back as 1966. These early bots used keyword recognition to generate scripted responses: ELIZA in 1966, A.L.I.C.E. in 1995, and SmarterChild in 2001. Next came conversational agents [7], which used natural language processing and machine learning to understand and respond to complex human language. IBM Watson beat the reigning Jeopardy! champions in 2011, the same year Apple released Siri; Amazon’s Alexa followed in 2014.
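The keyword-recognition approach those early bots relied on can be sketched in a few lines. This is a toy in the spirit of ELIZA, not its actual script format; the rules and replies below are invented for illustration:

```python
import re

# Each rule pairs a keyword pattern with a canned reply template.
# These rules are made up for illustration, not taken from ELIZA.
RULES = [
    (re.compile(r"\b(?:mother|father|family)\b", re.I),
     "Tell me more about your family."),
    (re.compile(r"\bI feel (.+)", re.I),
     "Why do you feel {0}?"),
    (re.compile(r"\b(?:hello|hi)\b", re.I),
     "Hello. What is on your mind today?"),
]

def respond(message):
    """Return the scripted reply for the first matching keyword rule."""
    for pattern, reply in RULES:
        match = pattern.search(message)
        if match:
            # Echo any captured text back into the reply template.
            return reply.format(*match.groups())
    return "Please, go on."  # default when no keyword matches

print(respond("I feel tired"))   # prints: Why do you feel tired?
print(respond("Nice weather"))   # prints: Please, go on.
```

The trick, then as now with scripted bots, is that there is no understanding at all: the program pattern-matches on keywords and echoes fragments of the user’s own words back, which is exactly the limitation the later conversational agents were built to overcome.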

A year later [7], machine learning was first used to generate photos and images, while Facebook developed the first deep learning facial recognition tool. In 2019 and 2020 [7], machine learning made significant inroads into healthcare, outperforming radiologists at identifying lung cancer and rapidly detecting COVID-19. Then in 2021, Time magazine [8] named Nvidia’s Omniverse one of the year’s top inventions.

The ChatGPT-style bots, known as generative AI chatbots [7], with the advanced machine learning and larger data capacity needed to compose original text and summarize content, entered our universe in 2021 with Jasper AI [7]. Then, in late 2022, OpenAI launched ChatGPT and took the internet, and the world, by storm. Google quickly followed with its own version, Bard, in early 2023.

We got here at a slow and steady pace at first, with bumps and crashes in the middle, and finally an explosive leveling-up. Even though Gordon Moore predicted that his law would reach its limit [5] sometime in the 2020s, AI has gone head-to-head with limitations throughout its history and persevered. We may soon realize that the power of AI is, in fact, limitless.