Abeerah Hashim
2024-09-01 07:59:35
latesthackingnews.com
A security researcher discovered numerous vulnerabilities in Microsoft Copilot that could expose users’ personal information, allowing data theft. Microsoft patched the vulnerabilities following the bug report. Nonetheless, the exact mitigation strategy remains unclear.
Numerous Microsoft Copilot Vulnerabilities Could Leak Data
The researcher Johann Rehberger recently shared insights about serious security issues with Microsoft’s AI flagship, Copilot. Rehberger discovered numerous vulnerabilities that could allow an adversary to steal data by injecting malicious prompts into Microsoft Copilot.
Specifically, the researcher demonstrated ASCII smuggling to inject malicious prompts into an AI model. Since prompt injection attacks remain a problem for AI models’ security, Copilot is no different, being similarly vulnerable to such attacks.
In his attack strategy, the researcher used Unicode characters mirroring ASCII that were invisible in the user interface. So, while the user won’t see those characters, the LLM would still read them and respond accordingly. For this, the attacker may input such characters through various means, such as by embedding within clickable hyperlinks. Clicking on such links would send data to third-party servers, allowing data exfiltration.
To inject such malicious prompts, the attacker could trick Copilot via maliciously crafted emails or documents. Upon processing such documents, Copilot would follow the prompts within the document and would generate the relevant output, allowing data theft.
The researcher shared the following video as the proof-of-concept, sharing the technical details in his write-up.
Rehberger discovered the vulnerability in January 2024, following which he reported the matter promptly to Microsoft. In response, Microsoft, following numerous communications in the following months, patched the vulnerabilities.
It remains unclear how Microsoft addressed these issues to prevent data exfiltration. The tech giant didn’t share any details about the patch despite Rehberger’s request. However, the researcher advised Microsoft to prevent the automatic invoking of tools following malicious prompts and not render hidden characters and clickable hyperlinks. The end results of Microsoft patches demonstrate the same.
Earlier, Microsoft also patched an SSRF flaw in Copilot. Exploiting that vulnerability could expose sensitive information from a firm’s internal network.
Let us know your thoughts in the comments.
Support Techcratic
If you find value in Techcratic’s insights and articles, consider supporting us with Bitcoin. Your support helps me, as a solo operator, continue delivering high-quality content while managing all the technical aspects, from server maintenance to blog writing, future updates, and improvements. Support innovation! Thank you.
Bitcoin Address:
bc1qlszw7elx2qahjwvaryh0tkgg8y68enw30gpvge
Please verify this address before sending funds.
Bitcoin QR Code
Simply scan the QR code below to support Techcratic.
Please read the Privacy and Security Disclaimer on how Techcratic handles your support.
Disclaimer: As an Amazon Associate, Techcratic may earn from qualifying purchases.