More security flaws found in popular AI chatbots — and they could mean hackers can learn all your secrets


If a hacker can monitor the internet traffic between a target and the target's cloud-based AI assistant, they can easily pick up on the conversation. And if that conversation contains sensitive information, that information ends up in the attacker's hands as well.

This is according to a new analysis from researchers at the Offensive AI Research Lab at Ben-Gurion University in Israel, who found a way to mount side-channel attacks against users of every major cloud-based Large Language Model (LLM) assistant, save for Google Gemini.
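The article does not detail the researchers' technique, but the general idea behind this class of side channel can be illustrated: if a streaming assistant sends each generated token in its own encrypted record, and the cipher adds a fixed per-record overhead, then ciphertext sizes alone reveal the lengths of the plaintext tokens. The sketch below is a hypothetical illustration of that principle (the overhead constant and function names are assumptions, not the study's actual method):

```python
# Illustrative sketch: recovering token lengths from encrypted packet sizes.
# Assumes (hypothetically) one token per encrypted record and a fixed
# per-record cipher overhead -- NOT the researchers' actual pipeline.

RECORD_OVERHEAD = 29  # assumed constant overhead (tag + header) in bytes


def token_lengths_from_packets(packet_sizes: list[int]) -> list[int]:
    """Infer plaintext token lengths from observed ciphertext record sizes."""
    return [size - RECORD_OVERHEAD for size in packet_sizes]


# A passive eavesdropper sees only the ciphertext sizes on the wire...
observed = [34, 32, 36, 31]
# ...yet can derive the exact length of each token in the response.
print(token_lengths_from_packets(observed))  # [5, 3, 7, 2]
```

A sequence of token lengths is far from random: combined with a language model trained to guess likely token sequences of those lengths, it can narrow the response down considerably, which is what makes this leak dangerous despite the encryption.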

