Companies that choose to amplify human capital, rather than replace it, will make all the difference.
Ana Paula Vescovi
Artificial intelligence has ceased to be a promise and has become routine in large corporations. According to McKinsey, 78% of global organizations already use AI in at least one business function. Technology, telecommunications, finance, and retail lead this adoption, with active-use rates between 30% and 40%, on top of pilot tests and prototypes.
After thirty years devoted to economics and policymaking, I made one of the most stimulating choices of my professional life: to come to Wharton, the business school at the University of Pennsylvania, to study the new frontiers of management and leadership.
One of the topics that has captured most of my attention so far is the use and implementation of artificial intelligence in business, a subject that blends technology, strategy, and the psychology of work. My challenge has been to connect this organizational perspective with what economics teaches about productivity, progress, and inequality.
As Professor Stefano Puntoni has shown, AI adoption, though broad, still lacks clear metrics. When he asked executives how many were tracking the return on investment in GenAI, almost no one raised a hand. The race for efficiency is real; the learning about its effects, not so much.
This imbalance reveals a central point: organizations will decide whether to use technology to replace people or to amplify their capabilities. The first option yields quick and superficial gains; the second builds lasting innovation and trust.
As Daron Acemoglu and Simon Johnson remind us in Power and Progress, technological advancement has never been an automatic synonym for prosperity. Over the past thousand years, economic power has repeatedly reorganized itself around machines — and not always in favor of the majority. When technology concentrates control, it widens inequality; when it spreads opportunity, it fosters well-being and growth. Progress is neither linear nor neutral: it is the product of human and institutional decisions.
Today, that choice is being repeated. Generative AI is redefining tasks and roles, but it is also challenging the very meaning of work. By automating what once expressed human talent, it can erode the sense of competence. By imposing standardized processes, it limits autonomy. And by replacing human interaction with algorithms, it weakens belonging. Competence, autonomy, and belonging are the three dimensions that, according to work psychology, sustain well-being and productivity. AI may also shorten the demanding path of learning, the one that develops instinct, judgment, and experience. Efficiency should not replace maturity.
The evidence is telling. Studies by Puntoni and others show that employees evaluated by algorithms display less empathy and less willingness to help colleagues. Statistical tests suggest that consumers, in turn, value products more when they perceive human participation in their creation. The technology is the same; what changes is the design of the interaction. Decisions about AI are made across multiple layers of governance, from teams to boards, from algorithms to regulation, and making these choices explicit is essential if culture is to translate into ethical standards and measurable outcomes.
Yet AI can also open a new cycle of opportunities. As Professor Christian Terwiesch has shown, generative AI can transform the innovation process itself. Rather than concentrating decisions in a few hands, it enables more people to create, test, and propose AI-supported solutions in what he calls “innovation tournaments.” In this context, the human role becomes that of a curator, not an executor.
GenAI helps generate more and better ideas, and faster, but it still needs people to give them direction, purpose, and judgment. Companies that learn to balance these forces – the machine’s speed and human discernment – will open new frontiers of competitive advantage.
The future of innovation, therefore, does not belong only to technology but to the culture that surrounds it. Companies that combine structured experimentation with freedom to imagine will create not just products but meaning. And perhaps that is the true promise of AI: to give people back the time and energy to think, create, and reinvent what progress truly ought to be.
The pace of change is fast, and policies for a just transition are urgently needed to support those most affected and to create conditions for continuous relearning.
Ultimately, however, these choices will not occur in a vacuum. They will be shaped by economic incentives, public policies, and the way markets value – or fail to value – investment in people. If the system rewards only short-term productivity gains, AI adoption will tend toward substitution and concentration. If, instead, it values human capital and the diffusion of knowledge, technological progress can become a new engine of shared prosperity. The economic balance of this new era will lie in combining productivity with human development, efficiency with learning.
The challenge, therefore, is not only for companies but also for regulation: ensuring that technological progress advances in step with economic and social progress. Artificial intelligence can generate abundance, or inequality. What will determine the outcome is not the algorithm itself, but the network of choices, incentives, and values we build around it.