Aman Mishra
2025-03-04 11:27:00
gbhackers.com
In a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed “LLMjacking.”
These attacks exploit non-human identities (NHIs), such as machine accounts and API keys, to hijack access to large language models (LLMs) hosted on cloud platforms like AWS.
By compromising NHIs, attackers can abuse expensive AI resources, generate illicit content, and even exfiltrate sensitive data, all while leaving victims to bear the financial and reputational costs.
Recent research by Entro Labs highlights the alarming speed and sophistication of these attacks.
In controlled experiments, researchers deliberately exposed valid AWS API keys on public platforms such as GitHub and Pastebin to observe attacker behavior.
The results were startling: within an average of 17 minutes, and as quickly as 9 minutes, threat actors began reconnaissance efforts.
Automated bots and manual attackers alike probed the leaked credentials, seeking to exploit their access to cloud AI models.
Reconnaissance and Exploitation Tactics
The attack process is highly automated, with bots scanning public repositories and forums for exposed credentials.
Once discovered, the stolen keys are tested for permissions and used to enumerate available AI services.
In one instance, attackers invoked AWS’s GetFoundationModelAvailability API to identify accessible LLMs like GPT-4 or DeepSeek before attempting unauthorized model invocations.
This reconnaissance phase allows attackers to map out the capabilities of compromised accounts without triggering immediate alarms.
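To make this step concrete, below is a minimal sketch of the same enumeration probe, run defensively against one's own credentials. GetFoundationModelAvailability is a console-side call rather than a documented SDK operation, so boto3's public list_foundation_models stands in for it here; the region is an assumption.

```python
# Minimal sketch of the reconnaissance step described above, using boto3's
# public STS and Bedrock APIs. Run against your own keys to see what a
# leaked copy of them would reveal. Region is an assumption.
import boto3
from botocore.exceptions import ClientError

session = boto3.Session()  # picks up whatever credentials are configured

# Step 1: confirm the key is valid and identify the account it belongs to.
identity = session.client("sts").get_caller_identity()
print("Key is live, account:", identity["Account"])

# Step 2: enumerate which foundation models the credentials can see.
bedrock = session.client("bedrock", region_name="us-east-1")
try:
    for model in bedrock.list_foundation_models()["modelSummaries"]:
        print(model["modelId"])
except ClientError as err:
    # AccessDenied here means the key lacks Bedrock permissions.
    print("Enumeration blocked:", err.response["Error"]["Code"])
```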
Interestingly, researchers observed both automated and manual exploitation attempts.
While bots dominated initial access attempts using Python-based tools like botocore, manual actions also occurred, with attackers using web browsers to validate credentials or explore cloud environments interactively.
This dual approach underscores the blend of opportunistic automation and targeted human intervention in LLMjacking campaigns.
Financial and Operational Impact
According to the report, the consequences of LLMjacking can be severe.
Advanced AI models charge significant fees per query, meaning attackers can quickly rack up thousands of dollars in unauthorized usage costs.
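A back-of-the-envelope estimate shows how fast this compounds; the per-token price below is illustrative, not taken from the report.

```python
# Back-of-the-envelope cost estimate. The rate is a hypothetical placeholder;
# premium hosted models are commonly billed per 1K tokens.
price_per_1k_output_tokens = 0.06   # illustrative USD rate, not from the report
tokens_per_request = 1_000          # one fully used response
requests_per_minute = 60            # a single modest bot loop

daily_cost = (
    price_per_1k_output_tokens
    * tokens_per_request / 1_000
    * requests_per_minute * 60 * 24
)
print(f"~${daily_cost:,.0f} per day")  # ~$5,184/day on these assumptions
```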
Beyond financial losses, there is also the risk of malicious content generation under compromised credentials.
For example, Microsoft recently dismantled a cybercrime operation that used stolen API keys to abuse Azure OpenAI services for creating harmful content like deepfakes.
To counter this emerging threat, organizations must adopt robust NHI security measures:
- Real-Time Monitoring: Continuously scan for exposed secrets in code repositories, logs, and collaboration tools.
- Automated Key Rotation: Immediately revoke or rotate compromised credentials to limit exposure time (a minimal sketch follows this list).
- Least Privilege Access: Restrict NHIs to only essential permissions, reducing the potential impact of a breach.
- Anomaly Detection: Monitor unusual API activity, such as unexpected model invocations or excessive billing requests.
- Developer Education: Train teams on secure credential management practices to prevent accidental leaks.
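For the key-rotation item above, here is a minimal sketch of immediate revocation using boto3's IAM API. The user name and key ID are hypothetical, and the caller is assumed to hold iam:UpdateAccessKey and iam:CreateAccessKey permissions.

```python
# Minimal sketch: deactivate a compromised access key, then issue a
# replacement. Identifiers below are hypothetical.
import boto3

def revoke_and_rotate(user_name: str, compromised_key_id: str) -> str:
    iam = boto3.client("iam")

    # Deactivate the leaked key first so it stops working immediately.
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=compromised_key_id,
        Status="Inactive",
    )

    # Issue a replacement; deliver the new secret over a secure channel,
    # never through a repo, log, or chat message.
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
    return new_key["AccessKeyId"]

if __name__ == "__main__":
    print(revoke_and_rotate("ci-build-bot", "AKIAXXXXXXXXEXAMPLE"))
```

Deactivating before rotating keeps the window between leak and revocation as short as possible; some teams prefer to delete the compromised key outright once the replacement is confirmed working.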
As generative AI becomes integral to modern workflows, securing NHIs against LLMjacking is no longer optional but essential.
Organizations must act swiftly to safeguard their AI resources from this rapidly evolving threat landscape.