MIT Professor Warns AI Can Give Financial Advice — But 'Has No Teeth' To Protect You
Experts warn about the lack of fiduciary duty in AI financial advice, despite its growing popularity among younger investors.

Ask most investors whether they would trust a chatbot with their retirement savings, and the answer, overwhelmingly, is no. Ask whether they have already used one for financial advice, and a surprising number say yes.
That contradiction sits at the heart of a warning from one of the most prominent figures in financial engineering. Andrew Lo, director of the Laboratory for Financial Engineering at the MIT Sloan School of Management, told CNBC on Monday that AI already has the financial knowledge to replace human advisers. What it does not have is a legal obligation to act in anyone's best interest.
'What they don't have is that fiduciary duty,' Lo said. 'They don't have the ability to suffer consequences if they make a mistake to the same degree that a human advisor does.'
He described the entire principle of putting a client's interests first as having 'no teeth' without personal liability behind it, PYMNTS wrote. A human adviser who breaches fiduciary duty faces regulatory sanctions, civil lawsuits, and potentially criminal charges. An AI system faces nothing.
Millions Seeking AI Financial Advice With No Legal Safety Net
The numbers are difficult to square. An Intuit Credit Karma poll published in September found that 66 per cent of Americans who have used generative AI turned to it for financial advice. Among millennials and Generation Z, the figure was 82 per cent.
The platforms they turn to, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, owe their users nothing in a legal sense. The answers come fast. They sound authoritative. And if they are wrong, there is no recourse.
Sebastian Benthall, a senior research fellow at New York University School of Law's Information Law Institute, said the question of accountability remains wide open. 'Who's really responsible, and can people really be relying on a product to do this if it's not being backed up by a corporation with a fiduciary duty?' he told PYMNTS. 'It's really unresolved.'
Lo was careful to note that AI does serve a purpose in financial literacy, calling it 'really good' at breaking down topics most people find baffling, like how Medicare works. But explaining a concept is not the same as telling someone what to do with their savings.
The regulatory environment has only made things worse. A Biden-era Department of Labor fiduciary rule, designed to cover advisers recommending 401(k) rollovers, collapsed after the Trump administration stopped defending it in court, per CNBC. That has left a growing number of intermediaries handling significant sums with no fiduciary standard at all.
Investors Want Better Advisers, Not a Replacement
A February 2026 report from Cerulli Associates found that only 38 per cent of affluent investors described themselves as even 'somewhat' comfortable with AI in financial services, SmartAsset reported. Those under 50 were more open to it, with over 60 per cent expressing support, but among investors aged 70 and over, the number fell to 16 per cent.
Northwestern Mutual's 2025 Planning and Progress Study painted a clearer picture. Investors were roughly three times more likely to prefer a human adviser to an AI system. But nearly half said what they actually wanted was a human who understands AI and uses it as a tool. The appetite is for augmentation, not replacement.
The industry appears to agree, at least for now. Charles Schwab's 2026 Advisor AI in Action study found 63 per cent of registered independent advisers have adopted AI in some form, but most limit it to administrative tasks like note-taking and email drafts. Only one in ten firms told Schwab they had fully woven the technology into how they operate, SmartAsset noted.
Not every human adviser is held to a fiduciary standard, either. The legal obligations depend on whether someone is a registered investment adviser, a stockbroker, or an insurance agent. But Lo's core point is a blunt one. A human professional who gives bad advice can be sued, fined, or stripped of a licence. A chatbot just generates the next response.
© Copyright IBTimes 2025. All rights reserved.