Zoom CEO envisions AI deepfakes attending meetings in your place



Zoom CEO Eric Yuan has a vision for the future of work: sending your AI-powered digital twin to attend meetings on your behalf. In an interview with The Verge’s Nilay Patel published Monday, Yuan shared his plans for Zoom to become an “AI-first company,” using AI to automate tasks and reduce the need for human involvement in day-to-day work.

“Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process,” Yuan said in the interview. “We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs.”

LLMs are large language models—text-predicting AI models that power AI assistants like ChatGPT and Microsoft Copilot. They can output very convincing human-like text based on probabilities, but they are far from being able to replicate human reasoning. Still, Yuan suggests that instead of relying on a generic LLM to impersonate you, in the future, people will train custom LLMs to simulate each person.
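To make the "probabilities" point concrete, here is a minimal Python sketch of what a language model actually does at each step: it scores every possible next token and samples from that distribution. It uses the open GPT-2 model via Hugging Face's transformers library purely as a stand-in, and the prompt is an illustrative assumption, not anything connected to Zoom's plans.

```python
# Minimal sketch of probability-based next-token prediction.
# GPT-2 and the prompt below are illustrative choices only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The meeting is scheduled for"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every token in the vocabulary

# Turn the scores at the final position into probabilities and show the
# five most likely continuations -- this is the core loop of an LLM.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")
```

Repeating that step, appending one predicted token at a time, produces fluent text, but nothing in the process guarantees the output reflects what a specific person would actually decide or say.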

“Everyone shares the same LLM [right now]. It doesn’t make any sense. I should have my own LLM — Eric’s LLM, Nilay’s LLM. All of us, we will have our own LLM,” he told The Verge. “Essentially, that’s the foundation for the digital twin. Then I can count on my digital twin. Sometimes I want to join, so I join. If I do not want to join, I can send a digital twin to join. That’s the future.”

Yuan thinks we’re five or six years away from this kind of future, but even the suggestion of using LLMs to make decisions on someone’s behalf is enough to leave some AI experts frustrated and confused.

“I’m not a fan of that idea where people build LLM systems that attempt to simulate individuals,” wrote AI researcher Simon Willison recently on X, independently of the news from Yuan. “The idea that an LLM can usefully predict a response from an individual seems so obviously wrong to me. It’s equivalent to getting business advice from a talented impersonator/improv artist: Just because they can ‘sound like’ someone doesn’t mean they can provide genuinely useful insight.”

In the interview, Patel pushed back on Yuan’s claims, pointing out that LLMs hallucinate, drawing inaccurate conclusions, so they aren’t a stable foundation for the vision Yuan describes. Yuan said that he’s confident the hallucination issue will be fixed in the future, and when Patel pushed back on that point as well, Yuan said his vision would simply arrive further down the road.

“In that context, that’s the reason why, today, I cannot send a digital version for myself during this call,” Yuan told Patel. “I think that’s more like the future. The technology is ready. Maybe that might need some architecture change, maybe transformer 2.0, maybe the new algorithm to have that. Again, it is very similar to 1995, 1996, when the Internet was born. A lot of limitations. I can use my phone. It goes so slow. It essentially does not work. But look at it today. This is the reason why I think hallucinations, those problems, I truly believe will be fixed.”

Patel also brought up the privacy and security implications of creating a convincing deepfake replica of yourself that others might be able to hack. Yuan said the solution was to make sure that the conversation is “very secure,” pointing to a recent Zoom initiative to improve end-to-end encryption (a topic, we should note, the company has lied about in the past). He also said that Zoom is working on ways to detect deepfakes as well as create them, in the form of digital twins.


