A Five-Part Blueprint for Empowering Corporate Transformation
Summary. Artificial intelligence is sparking a transformation of the economy at an unprecedented scale, pace, and level of uncertainty. Companies that act boldly and ahead of the curve will be positioned to capture the vast opportunities for growth and value creation. For leadership, this means grasping the far-reaching capabilities of AI as the twenty-first century’s general-purpose technology. Adopting a five-part blueprint for navigating disruptive change will help them lead their companies into the AI future.
In just a few decades, digital technologies have transformed a world tethered to landlines, devoid of personal computers and the internet, into one in which algorithms and data underpin the global economy and how we live, work, and play.
But even this seismic shift may be only a warmup act for the AI era now unfolding at breakneck speed. Recent rapid advances in AI have produced models with emergent capabilities, like logical reasoning, far sooner than these breakthroughs were forecast and to the surprise of many of the field’s most influential pioneers. And further development is now being enabled by models that help create AI’s two vital ingredients, data and processing power: by generating training datasets and helping design enhanced processors, AI is enabling the training of even more capable models, like a flywheel spiraling recursively upward.
Even under the most conservative plausible scenarios for AI’s future development, assuming no further breakthroughs such as the achievement of artificial general intelligence (a digital mind that rivals human intellect across all domains, the stated aim of leading AI labs), recent advances have set the stage for a transformation of profound scale and pace.
These advances are, for the first time, creating entities with sensing and decision-making capabilities that rival humans in all manner of tasks, including routine ones like driving cars, strategic ones like generating business scenarios, creative ones like composing music, and analytical ones like valuing houses.
Yet AI’s potential extends far beyond replicating human tasks to tackling previously intractable “grand challenges,” ranging from nuclear fusion to climate change and food security. One example is protein folding: in 2022, Google DeepMind announced that its AlphaFold system had predicted the structure of almost every known protein. This is accelerating discoveries across nearly every field of biology, from precision medicine to enzymes for breaking down plastic waste.
AI’s expected near-term impact alone is startling. Various forecasts predict gains to annual global economic output of as much as $15 trillion by 2030,1,2,3 equivalent to the combined output of Japan, Germany, India, and the U.K., which together account for roughly 15% of today’s $100 trillion world economy. Estimates based on recent advances in generative AI and other technologies suggest that activities accounting for up to 30% of current employee hours in the U.S. could be automated by 2030, rising to as much as 70% thereafter.4,5
In the so-called AI arms race, governments worldwide are declaring leadership ambitions and vying to capture upsides by cultivating domestic AI industries and enabling infrastructure like supportive policy frameworks, semiconductor foundries, and even national supercomputers for training proprietary AI models. Simultaneously, they are scrambling to understand and mitigate downside risks by studying AI safety and modeling potential societal dislocations, amid what may constitute a pivotal moment in human history akin to the Industrial Revolution or even the advent of agriculture.
AI’s Emergence as a General-Purpose Technology
Modern AI has emerged as the twenty-first century’s general-purpose technology. General-purpose technologies are foundational innovations with extensive use cases. They enable seismic leaps in what humans can do, and reshape economies, societies, geopolitics, and even our physical surroundings. And as with preceding general-purpose technologies—like the internal combustion engine, which first powered Jean Joseph Etienne Lenoir’s vehicle to drive seven miles out of Paris in 1863, a full 45 years before Henry Ford’s 1908 Model T marked a turning point in the automobile’s proliferation—modern AI and the sea change it is beginning to unleash have been decades in the making.
The term “artificial intelligence” was coined in 1955 in a proposal for a Dartmouth College research project “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” That project, whose aims are now a reality, saw the emergence of AI as a distinct field. A machine’s ability to perform tasks requiring expert knowledge was first demonstrated in 1965 by Stanford University’s Dendral, an early AI system that could suggest possible molecular structures for organic compounds. The ability of machines to outperform human intelligence in specific domains was proven in 1997, when IBM’s Deep Blue defeated the reigning world champion at chess.
But twentieth-century surges of promise and investment were often followed by disappointment, leading to periods of stagnation known as “AI winters.” Progress and adoption were constrained by high development costs; by the limitations of past AI architectures, which required domain-specific rules and knowledge to be programmed in by hand, confining systems like Dendral and Deep Blue to single functions such as predicting molecular structures and playing chess; and by short supplies of computational power and data.
Twenty-first-century expansion of the digital economy has attenuated those historical challenges and made various fields of AI a fixture of daily life: in some surveys, more than half of companies sampled report using AI in at least one business function, though only 3% report use in five or more functions.6
The convergence of vast data and computational power together with modern AI architectures—including deep learning neural networks inspired by the workings and flexibility of the human brain—has propelled AI to embody a broader range of advanced capabilities and applications. This includes the advent of so-called foundation models, which are large systems trained on vast quantities of diverse data, with large language models like OpenAI’s GPT-4 being one type.
Foundation models not only perform a wide variety of functions—just as readily summarizing a 100-page technical report on battery manufacturing as finding weaknesses in legal contracts or tailoring a meal plan to a family’s dietary requirements and budget—they also serve as a base for further fine-tuning and adaptation to specific tasks or applications. For example, Google’s Med-PaLM 2 has been tuned from Google’s foundation models to answer medical questions, and Salesforce’s Einstein GPT leverages OpenAI’s foundation models to generate content for marketing, sales, and customer service professionals.