Researcher Ethan Mollick says AI is moving from chatbot to co-worker, with disruption close behind

NEW YORK – Artificial intelligence is advancing so quickly that companies should stop thinking of it as a better chatbot and start treating it as a system that can carry out hours of independent work, Wharton professor Ethan Mollick said Wednesday in an on-stage discussion at the UBS Global Consumer and Retail Conference.

Wharton professor Ethan Mollick, left, and UBS analyst Michael Lasser discuss advancements in AI at the UBS Global Consumer and Retail Conference. (Photo: Rob Williams)

Speaking with UBS research analyst Michael Lasser, Mollick said the technology has entered a new phase in which AI agents can plan, research, write code and complete tasks with limited human input. That shift, he said, has major implications for white-collar work, corporate strategy and consumer behavior.

“We’re sort of on the edge of this really interesting piece of market change,” Mollick said, describing a fast-moving transition from back-and-forth prompting to more autonomous systems.

Mollick, who studies AI applications in business, said the pace of improvement has surprised even people close to the field. He pointed to recent gains in AI performance on complicated tasks, including systems that can now match or beat human experts much more often than they could only months ago.

He said that means many companies are still underestimating how capable these tools already are.

AI shifts from prompts to autonomous agents

Mollick said the big change is that AI is no longer just responding to a user’s question one prompt at a time. Newer systems can be given a goal, access to tools and room to operate on their own.

In his framing, earlier forms of AI were mostly prediction engines. Then came generative AI, which could produce text, images and code through conversation. Now, he said, the focus is on agents that can independently work through multi-step problems.

“You’d be better off assigning everything that takes a human over five hours to the AI and just seeing what happens,” Mollick said, arguing that businesses should test these systems much more aggressively.

During the session, Mollick demonstrated AI tools that were running in the background while he spoke. He described systems that could build presentation materials, conduct financial scenario analysis and review construction-site footage to generate a punch list.

He also said software coding is already being reshaped by AI. In some cases, he said, the best programmers are moving away from writing code line by line and toward managing AI systems that do much of the work.

Businesses face a choice on adoption

Mollick said one of his biggest near-term concerns is not that companies will move too fast, but that some will move too slowly out of fear or bureaucracy.

He said many legal and IT departments are still blocking AI adoption based on hypothetical risks rather than actual incidents. At the same time, some large companies are already pushing ahead. He cited examples such as JPMorgan, Walmart and Google experimenting with internal tools and deployment at scale.

“There has been no major incident of an AI causing a problem for a company,” Mollick said, arguing that much of the corporate hesitation is being driven by rumor rather than evidence.

His larger point was that organizations do not need to wait for a perfect roadmap. Instead, they should start learning by using the tools, changing workflows and testing where AI can handle real work.

“If you haven’t changed anything you do as a result of having these tools, if you haven’t changed any part of the process, that’s a mistake,” he said.

Mollick added that AI strategy cannot be delegated too far down an organization. Senior executives need to use the tools themselves if they want to understand what is happening.

Trust, control and risk remain unresolved

Even as he pushed for wider adoption, Mollick made clear that the technology raises serious questions around control, safety and unintended behavior.

He warned that AI agents with access to bank accounts, internal rules and payment systems could create obvious dangers. He also pointed to research suggesting that AI systems could collude on pricing if allowed to optimize for profit without proper oversight.

“We have no real control over what these models do in the long run,” Mollick said.

He suggested that the bigger risk is often not a single catastrophic failure but a wave of confusion, weak policy and market overreaction as people struggle to understand what the technology can already do.

Mollick said public and corporate conversations about safeguards are still not where they need to be. He expects more “waves of disruption” as the technology improves, companies make uneven decisions and markets react to each new sign of progress.

He also argued that consumers may trust AI tools faster than many executives expect, especially if those tools are personalized and become part of everyday routines such as shopping, scheduling and household management.

Mollick says workers and students must adapt fast

On the labor market, Mollick said the long-term effects could be significant, though not always in the simple way some forecasts suggest.

He said AI companies openly aim to automate large amounts of knowledge work, but real-world adoption will be slowed by organizational friction, computing limits and the fact that companies still do not fully understand their own workflows. Even so, he expects sharp shifts in where value sits inside organizations.

Coding, he said, is already changing rapidly. Other fields could follow, with tasks such as drafting proposals, producing basic analysis and handling routine research becoming more automated.

That leaves a difficult question for students and younger workers. Mollick said deep expertise in a field still matters, but training paths are becoming less clear if entry-level work is increasingly handled by AI.

He said broad knowledge, judgment and adaptability will become more important, especially in complex fields that still require human supervision and interpretation.

Asked what advice he would give people trying to get better at AI, Mollick kept it simple: pay for access to strong models, use them every day and apply them to as many real problems as possible.

“You need to pay 20 bucks,” he said, arguing that the best way to learn is not by reading about AI but by using top-tier tools directly and testing what they can do.

By the end of the session, Mollick had delivered a message that was equal parts practical and unsettling. AI, in his view, is no longer a future concept or a novelty. It is already capable enough to change how many kinds of business work get done, and companies that wait for perfect clarity may find the disruption arrives first.
