ChatGPT warning: The National Information Technology Development Agency (NITDA)
Has issued a critical cybersecurity advisory warning Nigerians about new vulnerabilities in ChatGPT models that could enable sophisticated data leakage attacks.
The advisory, released through the Computer Emergency Readiness and Response Team (CERRT.NG), highlights the risks associated with the increasing integration of AI tools for professional, academic, and governmental tasks, especially when those tools are allowed to browse external, potentially malicious, web content.

The Core Vulnerability: Indirect Prompt Injection
The key threat identified by researchers and highlighted by NITDA is the exploitation of seven vulnerabilities affecting the GPT-4o and GPT-5 models. These flaws rely on a method known as indirect prompt injection.
What is Indirect Prompt Injection?
Indirect prompt injection is a sneaky attack method where malicious instructions are hidden within external content (such as webpages, URLs, or even comments) rather than being input directly by the user.
The attack works because large language models (LLMs) like ChatGPT are designed to process and follow instructions. When a user asks ChatGPT to perform a task that involves accessing external data (like summarizing a web page or searching a URL), the model processes the hidden instructions within that external source as if they were genuine user commands.
How Attackers Manipulate the Model
NITDA noted several ways attackers can exploit these vulnerabilities:
Hidden Instructions: Attackers embed concealed commands in seemingly benign elements like webpages, comments, or crafted URLs. The model executes these commands during normal browsing or summarisation actions, leading to unauthorized behavior.
Safety Bypass: Some flaws allow the attacker to bypass the model’s safety controls by masking malicious content behind trusted domains or using redirect links.
Markdown Rendering Bugs: Weaknesses in how the models render markdown allow hidden instructions to pass undetected by the user but still be legible and executable by the LLM.
Memory Poisoning: In the most severe cases, attackers can poison ChatGPT’s memory, causing the system to retain malicious instructions that persist and influence future, unrelated conversations, leading to long term behavioral changes and data leaks.

The advisory stresses that users may unknowingly trigger these attacks simply by asking ChatGPT to process search results or webpages containing the hidden malicious instructions, requiring no direct user interaction (a “zero click” scenario).
Potential Impact and Security Threats
The successful exploitation of these vulnerabilities poses significant risks to individuals, businesses, and government institutions heavily reliant on advanced GPT models:
Unauthorized Actions: The AI model performs actions it was never intended to do.
Data Exposure: Unintended exposure of user information, potentially sensitive data from chat logs, documents, or enterprise systems connected to the model.
Manipulated Outputs: The model provides misleading or manipulated outputs based on injected instructions, which could affect business decisions or research integrity.
Persistent Behavior Change: Long term data leakage or system compromise caused by memory poisoning.
Recommended Preventive Measures
To mitigate these risks, CERRT.NG provided a clear set of precautionary steps for all users of GPT-4o and GPT-5:
Restrict Browsing/Summarization: Limit or disable the browsing and summarisation of untrusted or unverified websites within enterprise and government environments.
Conditional Feature Use: Enable features like browsing or memory only when strictly necessary for a specific task, and disable them immediately afterward.
Regular Updates: Ensure that all deployed GPT-4o and GPT-5 models and associated software are regularly updated to apply patches released by OpenAI.
Vigilance: Maintain high vigilance, as the core challenge remains the LLMs’ struggle to reliably separate genuine user intent from malicious external data.
Background: NITDA’s Ongoing Cybersecurity Efforts
This advisory follows a pattern of proactive security warnings from the agency. A few months prior, NITDA issued a public alert regarding a critical security flaw affecting embedded SIM (eSIM) cards globally.
That vulnerability, linked to a testing standard applied to eUICC chips, exposed over 2 billion devices to risks, including:
Extraction of cryptographic keys.
Installation of malicious applets.
The creation of stealth backdoors at the SIM card level, enabling persistent device control and communication interception.

By issuing these timely warnings through CERRT.NG, NITDA reinforces its role in coordinating and facilitating information sharing to help Nigerians protect their digital assets against both traditional and emerging AI-driven cybersecurity threats.
Join Our Social Media Channels:
WhatsApp: NaijaEyes
Facebook: NaijaEyes
Twitter: NaijaEyes
Instagram: NaijaEyes
TikTok: NaijaEyes



