AI has a big problem when it comes to financial advice: MIT professor



The financial capabilities of artificial intelligence platforms are improving to the point that AI will likely be able to replace human financial advisors in the future, according to finance experts.

However, AI has a major drawback relative to human advisors: a lack of fiduciary duty. And a resolution to that legal gray area doesn't appear close at hand, experts said.

A fiduciary duty is a legal obligation that many financial advisors — and professionals in other fields, such as lawyers and doctors — owe their clients. It essentially means they will put their clients’ best interest ahead of their own.

“The problem that we have to solve is not whether AI has enough expertise,” said Andrew Lo, a finance professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “The answer right now is, clearly, AI has the [financial] expertise.”

“What they don’t have is that fiduciary duty,” Lo said. “They don’t have the ability to suffer consequences if they make a mistake to the same degree that a human advisor does.”

An advisor who violates their fiduciary responsibility can be subject to fairly serious consequences, including regulatory penalties, civil liabilities and criminal charges, Lo said.

The notion of putting a client’s interest ahead of yours “has no teeth” without responsibility or legal liability, he said.

An ‘unresolved’ legal question


About 85% of respondents who have used generative AI for financial advice acted on the recommendations provided, according to a survey of 1,019 adults.

“People are looking to these services for all sorts of advice, and they’re getting it, and it seems to be a big open regulatory question,” said Sebastian Benthall, a senior research fellow at New York University School of Law’s Information Law Institute.

“Who’s really responsible, and can people really be relying on a product to do this if it’s not being backed up by a corporation with a fiduciary duty?” Benthall said. “It’s really unresolved.”

Why you shouldn’t blindly trust AI — or humans

That said, there are some good use cases for AI in financial planning, Lo said.

AI is "really good" at providing online resources on financial concepts that the typical person doesn't understand, Lo said. For example, if someone asks basic questions about Medicare, AI can generally provide a reliable overview, he said.

While AI’s output is sophisticated in many financial respects, consumers generally shouldn’t blindly trust answers to questions about their own household finances, Lo said.


James Burnham, a legal and government affairs official at Elon Musk’s xAI, said in a social media post in March that the company’s AI platform, Grok, “is not tax advice so always confirm yourself too.”

Of course, many human financial advisors provide advice to clients, and it is then up to the client to decide whether to implement it.

“I think that’s the way that I would look at LLMs: They can be very, very useful in providing different options and in describing how those options might work, but you should always remember that the advice that they can give you could be wrong,” Lo said.

“But I would argue that that’s true with human financial advisors as well,” he said.

Not all human advisors are fiduciaries


Benthall, of New York University, described another legal predicament posed by AI advice: Because the major AI companies are largely U.S.-based, if an AI platform were to suggest that investors put their retirement savings into U.S. stocks, that advice could be viewed as self-dealing, a form of financial conflict of interest.

That said, companies that provide AI services don’t appear to receive compensation for their advice to retail investors, and therefore aren’t fiduciaries, said Jiaying Jiang, an associate law professor at the University of Florida Levin College of Law who is researching AI and fiduciary duty.


However, financial advisors who owe a fiduciary duty to clients could violate that duty by using AI, Jiang said.

For example, if an advisor uses AI to give a certain recommendation to a client, but that recommendation isn’t in the client’s best interest, it is the advisor — and not the company backing the AI platform — that would be liable, Jiang said.

Ultimately, Lo said he thinks government policy needs to change to provide fiduciary protections for consumers who get financial advice from AI.

Until then, “we’re not going to get to the point where we can fully delegate these [financial] decisions,” Lo said.

“But I do believe that that will eventually happen,” he said.
