
AI Will Amplify What People Can Achieve Together

[Image: A futuristic office scene showing a diverse team working alongside a humanoid AI robot, with a glowing world map and data visualisations in the background, symbolising global reach and AI-powered teamwork.]
Written by
Oscar Harding
Published on

The real leap will come when people and AI raise the ceiling on what teams can do together

For the last few years, the loudest AI debate has been about substitution. Will AI replace writers, coders, analysts, designers, assistants, support teams, and eventually whole layers of knowledge work? That question matters, but it is too narrow. The more important story is what happens when AI does not simply stand in for a person, but instead amplifies what groups of people can achieve together. That is where things start to get interesting, and where the long-term upside looks much bigger than the one-person productivity story.

The case for this is not just marketing fluff. A growing body of research is moving in the same direction. Stanford’s 2025 AI Index says evidence continues to build that AI boosts productivity and often helps narrow skill gaps across the workforce. Microsoft’s 2025 Work Trend Index says businesses are already looking at AI and agents as a way to expand capacity because leaders want more output, while workers are running out of time and energy. In plain English, organisations are hitting human limits, and AI is being pulled in not only as a cheaper worker, but as a capacity multiplier.

The Team Story Matters More Than The Solo Story

The easiest way to think about AI is as a superpowered assistant for individuals. That frame makes sense because it is visible. One person writes faster. One person summarises faster. One person builds a presentation faster. One person learns quicker. But the highest value work in business, science, media, engineering, logistics, government, and healthcare is rarely a solo act. It happens in teams. It depends on handoffs, shared memory, coordination, priorities, and decisions being made across more than one brain. That is exactly why collective intelligence researchers are paying so much attention to AI right now.

A 2025 editorial on AI for collective intelligence argued that AI can strengthen three things that matter inside groups: collective memory, collective attention, and collective reasoning. In other words, AI can help teams remember more of what they know, focus better on what matters, and reason through problems more effectively together. That is a much richer way to understand the technology than the usual headline version of “AI writes emails now.” The real opportunity is that AI can become a layer that helps teams surface hidden expertise, reduce coordination drag, and align around better decisions.

That framing is powerful because most teams do not fail because nobody is smart. They fail because knowledge gets trapped in silos, people lose focus, meetings eat time, context is scattered, and priorities drift. AI will not magically fix bad leadership or broken culture, but it can reduce the friction that stops capable groups from performing like capable groups. It can become a connective layer. Used well, that is where the multiplier effect begins.

The Research Is More Nuanced Than The Hype

It is worth being careful here, because the data is not saying "human plus AI always wins." In fact, some of the strongest research says the opposite. MIT Sloan summarised a large review of more than 100 studies and found that, on average, human-AI combinations did better than humans alone, but not better than AI alone. The same research found something important beneath the averages: human-AI combinations tend to work best when humans are already stronger at the task, and in creative or content-generation settings. When AI is already clearly better than humans at a task, mixing humans into the loop can actually drag performance down.

That is a huge clue about the future. AI does not amplify teamwork by being bolted onto everything equally. It amplifies teams when the work is designed properly, when roles are clear, and when humans are using AI to extend judgment, creativity, exploration, or coordination rather than second-guessing a machine that is already outperforming them on a narrow task. The winners will not be the teams that merely "use AI." They will be the teams that redesign work around where human strengths and machine strengths genuinely complement each other.

This matters because too much of the current conversation is still stuck at the level of tool adoption. Companies ask whether staff are using AI. That is the wrong question. The better question is whether AI is changing how groups think and operate together. Are teams finding knowledge faster? Are they cutting meeting overload? Are they improving handoffs? Are they broadening idea generation? Are they making fewer avoidable mistakes? Are they freeing up more human energy for judgment and relationships? Those are the questions that reveal whether AI is acting as a toy, a threat, or a multiplier.

AI Can Flatten Expertise Gaps Inside Teams

One of the more intriguing findings from recent field research is that AI may change what a good team looks like. A 2025 NBER field experiment with 776 professionals at Procter & Gamble examined how generative AI reshaped teamwork around real innovation tasks. The study focused on performance, expertise sharing, and social engagement, which is a far more realistic business setting than a toy benchmark. Based on the paper summary, the researchers found that AI changed the core pillars of collaboration rather than simply making individuals faster in isolation.

That point should not be underestimated. If AI can help less experienced people perform more like stronger contributors, or help specialists bridge gaps across functions more easily, then the benefit is not just productivity. It is organisational elasticity. Teams become less brittle. Smaller groups can punch above their weight. More people can contribute meaningfully to complex work without needing years to master every technical layer first. That does not eliminate expertise, but it can widen access to it.

We have already seen a version of this in earlier work. The well-known NBER study on generative AI in customer support found that access to AI assistance boosted productivity, with especially strong gains for less experienced and lower-skilled workers, suggesting the technology can help diffuse best practices. That is not just an individual labour market story. In team terms, it implies AI can raise the floor of contribution and reduce performance gaps that often slow group output.

Workers Do Not Want Total Automation

One of the clearest warnings comes from Stanford's 2025 work on what workers actually want from AI. Researchers surveyed 1,500 workers and found that people generally want AI to handle repetitive tasks, free up time for higher-value work, and improve work quality. But they also found strong resistance to full automation. The most preferred setup was a collaborative relationship: 45.2% wanted an equal partnership between worker and AI, and 35.6% wanted human oversight at critical points. That is not a population asking to be replaced. It is a population asking for augmentation with control.

The same Stanford work also found a mismatch between what companies are automating and what workers actually want. According to the study summary, 41% of tasks fell into low priority or red light zones, meaning much of current AI implementation is either unwanted or technically not feasible. That should be a red flag for anyone who thinks shoving AI into every corner of work is the same as progress. Amplification only works when people trust the system and when the system is aimed at the right work.

This is where the opinion part gets sharper. I do not think the future belongs to firms that pursue the most automation. I think it belongs to firms that create the best human-AI operating model. That means using AI to reduce drudgery, expose insight, and accelerate decision support while keeping humans anchored in accountability, creativity, ethics, and relationship-heavy work. Companies that get this wrong will create faster confusion. Companies that get it right will create faster alignment.

The New Multiplier Is Coordination

Microsoft’s 2025 Work Trend Index offers another clue. It says 53% of leaders believe productivity must increase, while 80% of the global workforce says they lack enough time or energy to do their work. It also says 82% of leaders are confident they will use digital labor to expand workforce capacity in the next 12 to 18 months. Strip away the corporate phrasing and this means modern work is already overloaded, and AI is being treated as an answer to that overload.

But capacity is not only about speed. It is about coordination. A team can be full of smart people and still underperform because every project gets buried in fragmented documents, endless updates, duplicated effort, and scattered attention. If AI can manage context, route knowledge, summarise progress, surface blockers, and help assign the right information to the right person at the right moment, then it is not just making workers faster. It is reducing the tax that bad coordination places on the whole group.

That is why I think the phrase “AI coworker” is only half right. AI is not just a coworker. It is also becoming a coordination layer. In strong teams, that layer could quietly do some of the work that currently burns human attention: memory retrieval, context preservation, first-draft synthesis, risk flagging, workload visibility, and cross-functional translation. Those are not glamorous tasks, but they are often the difference between momentum and slowdown.

The Ceiling Rises When More People Can Build

There is another reason AI will amplify what people can achieve together: it lowers the cost of contribution. When more people can prototype, model, visualise, draft, test, or explore ideas without waiting on a specialist bottleneck, the group’s idea surface expands. That does not mean every idea is good. It means more ideas can be examined, combined, and improved. The collective intelligence paper explicitly argues that AI can expand intellectual space when used as a teammate by increasing experimentation and shortening the time, effort, and skill required to move from idea to solution.

This is potentially enormous for smaller businesses, startups, nonprofits, councils, schools, research teams, and community organisations. Historically, many groups have been limited not by the quality of their ideas but by the cost of turning those ideas into plans, content, systems, analysis, or working prototypes. AI can lower that barrier. That means better collaboration is no longer just a big-enterprise game. Small groups with clear goals and good judgment may suddenly be able to operate at a level that once required far more staff and budget.

That is one reason I suspect the next wave of AI winners will not only be giant corporations. It will also include nimble teams that understand how to combine human trust, domain knowledge, and AI leverage. The future may belong to groups that know how to orchestrate capability, not just hire it.

The Risks Are Real And Need Saying Out Loud

There is a temptation to stop here and write the glossy version. AI makes teams smarter, faster, and more creative. End of story. But the research does not support that kind of lazy optimism. The collective intelligence literature also warns that AI can narrow intellectual space, deskill workers, homogenise outputs, amplify bias, and subtly distort what teams pay attention to. In other words, the same technology that can make collaboration richer can also make it flatter and more fragile if used badly.

That risk matters because teams are easily nudged by whatever feels authoritative, fast, and available. AI outputs can become default thinking. That is especially dangerous in environments that need dissent, nuance, or local context. If every team starts from the same machine-generated frame, you can get speed at the cost of originality. You can also get a false sense of consensus. That is not amplified intelligence. That is coordinated mediocrity.

So the real challenge is not whether to bring AI into group work. It is how to do it without crushing diversity of thought. Teams need deliberate friction in the right places. They need room for human judgment, minority views, and context that the model may miss. AI should widen the conversation, not prematurely close it.

What This Means For The Future Of Work

The big picture is becoming clearer. AI will not matter most because it helps one person finish a task 20% faster. It will matter most because it changes how people combine their skills. It will change how teams learn, how they share expertise, how quickly they iterate, how they coordinate, and how much complexity they can handle together. That is the real amplification story.

The organisations that thrive will probably be the ones that stop treating AI as either a magic employee or a scary replacement machine. Instead, they will treat it as infrastructure for better collective performance. They will ask where AI improves memory, where it sharpens attention, where it supports reasoning, and where it should stay out of the way. They will redesign workflows around teams, not just around prompts.

My view is simple. The AI future worth building is not one where a few people automate away everyone else. It is one where more people can contribute at a higher level, where strong teams become more capable, and where human cooperation itself becomes more productive. If that happens, AI will not just make work faster. It will raise the ceiling on what communities, companies, and institutions can achieve together. That is a much bigger story than replacement. And it is the one worth paying attention to.
