Chinese scientists’ attack on ChatGPT shows how criminals can exploit AI
Researchers in Beijing have found vulnerabilities in commercial AI models, including ChatGPT, that allow them to be manipulated with doctored images so that they fail to filter out harmful content. The team identified two types of adversarial attacks against multimodal large language models (MLLMs): image feature attacks and text description attacks. The findings highlight security concerns within AI, the potential for malicious use, and the need for stronger technical risk controls and defence mechanisms.

Read more at South China Morning Post…