
Brilliant AI, broken defenses?
AI-powered apps are revolutionizing how we search, learn, and communicate, but the rapid pace of innovation has come at a cost: security is often an afterthought.
As part of our AI App Security Analysis Series, we’ve been scrutinizing some of the most popular AI tools on Android for hidden vulnerabilities that could put millions of users at risk.
After revealing major security flaws in DeepSeek and Perplexity AI, our latest deep dive focuses on ChatGPT’s Android app—one of the most downloaded AI apps globally. Despite the sophistication of the AI under the hood, the mobile app’s security posture is alarmingly weak.
Surely the company behind the world's most advanced AI ships an equally well-defended app? No, not really.
When we decided to test the ChatGPT Android app, we assumed we’d be in for a different kind of audit. After all, this wasn’t a small team racing to ship the next big thing—this was OpenAI. Backed by billions, powered by the most sophisticated language model on the planet, and downloaded by millions. If anyone had the resources to build a secure mobile app, it was them.
Instead, what we found was surprisingly risky—even for a company leading the AI race.
Despite the intelligence behind the scenes, the app’s security posture was riddled with issues we’ve seen time and again in this series. Old vulnerabilities. Missing controls. Zero runtime defense.
In short, the AI might be brilliant, but the mobile app? Not so much. We expected better from ChatGPT.
Our static and dynamic analysis of the ChatGPT Android app (v1.2025.133) revealed multiple high-risk and critical vulnerabilities, including:
Attack type: Credential exposure | Risk level: Critical
We discovered hardcoded Google API keys embedded in the app’s code. These can be easily extracted and misused, allowing attackers to impersonate legitimate requests or interact directly with backend systems.
How could ChatGPT fix it?
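One common remediation is to keep long-lived keys off the device entirely: the client authenticates to the vendor’s own backend and receives a short-lived, narrowly scoped token instead. Below is a minimal Kotlin sketch of that pattern using OkHttp; the endpoint, host, and token flow are illustrative assumptions, not OpenAI’s actual infrastructure.

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

// Hypothetical: instead of shipping a long-lived Google API key inside the APK,
// ask an authenticated backend to mint a short-lived, scoped token at runtime.
fun fetchShortLivedToken(client: OkHttpClient, sessionToken: String): String {
    val request = Request.Builder()
        .url("https://backend.example.com/v1/api-token") // placeholder endpoint
        .header("Authorization", "Bearer $sessionToken") // caller must be signed in
        .build()
    client.newCall(request).execute().use { response ->
        require(response.isSuccessful) { "Token request failed: ${response.code}" }
        // The token expires quickly and is scoped server-side, so extracting it
        // from a device is far less valuable than a key baked into the APK.
        return response.body!!.string()
    }
}
```

Even if an attacker pulls this token out of memory, it expires and can be revoked server-side, unlike a hardcoded key.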
Attack type: Impersonation attack | Risk level: Critical
The app does not implement SSL certificate pinning. This makes it vulnerable to man-in-the-middle (MitM) attacks, where an attacker intercepts and manipulates data in transit.
How could ChatGPT fix it?
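The standard Android fix is certificate pinning, either through a pin-set in network_security_config.xml or, for OkHttp-based apps, via CertificatePinner. A minimal sketch, with a placeholder host and placeholder SHA-256 pins rather than OpenAI’s real values:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Pin the API host's certificate public keys. Any MitM proxy presenting a
// different certificate chain will fail the TLS handshake at the client.
val certificatePinner = CertificatePinner.Builder()
    .add(
        "api.example.com", // placeholder host
        "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=", // primary pin (placeholder)
        "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB="  // backup pin for rotation
    )
    .build()

val pinnedClient = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()
```

Shipping a backup pin alongside the primary one lets the server rotate certificates without breaking connectivity for installed clients.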
Attack type: Privilege escalation | Risk level: High
ChatGPT runs without complaint on rooted devices, leaving it exposed to privilege escalation, system-level tampering, and data extraction.
How could ChatGPT fix it?
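A reasonable first step is a root heuristic at startup. The sketch below checks for common su binaries and test-keys builds; these signals are easy to spoof individually, which is why production apps typically pair them with server-backed attestation such as the Play Integrity API.

```kotlin
import android.os.Build
import java.io.File

// Naive root heuristics: presence of an su binary on well-known paths,
// or a build signed with AOSP test-keys. Bypassable on their own; combine
// with attestation (e.g., Play Integrity) for real assurance.
fun isLikelyRooted(): Boolean {
    val suPaths = listOf(
        "/system/bin/su", "/system/xbin/su",
        "/sbin/su", "/system/sd/xbin/su", "/data/local/bin/su"
    )
    val hasSuBinary = suPaths.any { File(it).exists() }
    val isTestKeysBuild = Build.TAGS?.contains("test-keys") == true
    return hasSuBinary || isTestKeysBuild
}
```

On a positive result the app can degrade gracefully, for instance by refusing to persist session tokens on that device.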
We identified exposure to multiple high-profile Android vulnerabilities:
Attack type: APK modification and malware injection | Risk level: Critical
Janus (CVE-2017-13156) allows attackers to inject malicious code into signed APKs without invalidating their signatures.
Attack type: Phishing and identity theft | Risk level: Critical
StrandHogg enables malicious apps to hijack the app’s tasks and overlay fake UI screens to steal credentials.
Attack type: UI manipulation | Risk level: High
Tapjacking uses invisible overlays to trick users into interacting with hidden UI elements.
How could ChatGPT fix these vulnerabilities?
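Each has a well-understood mitigation. Janus is closed at build time by signing with APK Signature Scheme v2 or newer and raising minSdkVersion so v1-only verification is never accepted. StrandHogg-style task hijacking is addressed in the manifest, for example with an empty android:taskAffinity on sensitive activities. Tapjacking is handled at runtime; a minimal sketch of that part:

```kotlin
import android.app.Activity
import android.os.Build
import android.os.Bundle

class SecureActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Discard touch events delivered while another window is drawn
        // over this one, defeating invisible-overlay tapjacking.
        window.decorView.filterTouchesWhenObscured = true
        // On Android 12+ (API 31), hide untrusted overlay windows entirely.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
            window.setHideOverlayWindows(true)
        }
    }
}
```

setHideOverlayWindows requires declaring the HIDE_OVERLAY_WINDOWS permission in the manifest; filterTouchesWhenObscured works on all modern API levels.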
Attack type: Runtime tampering and hooking | Risk level: High
The app makes no attempt to detect Frida or Xposed hooking frameworks, nor does it block execution in debug or ADB-enabled environments, making it easy to tamper with its runtime behavior.
How could ChatGPT fix this vulnerability?
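Runtime protection is usually layered: scan for hooking-framework artifacts, check for attached debuggers, and react when either fires. Here is a basic sketch of two such signals; determined attackers can bypass both, which is why commercial RASP tooling combines many more.

```kotlin
import android.os.Debug
import java.io.File

// Two cheap tamper signals: Frida artifacts in the process memory map,
// and a debugger attached over ADB/JDWP. Substring checks are naive and
// should be tuned to avoid false positives in a real deployment.
fun detectRuntimeTampering(): Boolean {
    val mapsFile = File("/proc/self/maps")
    val fridaInMaps = mapsFile.exists() && mapsFile.useLines { lines ->
        lines.any { it.contains("frida", ignoreCase = true) }
    }
    val debuggerAttached = Debug.isDebuggerConnected()
    return fridaInMaps || debuggerAttached
}
```

When a check trips, apps typically fail closed: wipe cached credentials, refuse sensitive operations, or exit.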
These aren’t just theoretical risks. Attackers love this stuff because it works.
Across this series, we tested three of the most talked-about AI apps: DeepSeek, Perplexity, and now ChatGPT. The names differed, but the security story remained frustratingly similar.
We didn’t go into this looking to find fault. We wanted to understand how secure AI really is in your pocket. What we uncovered was a clear pattern of rushed releases and missed fundamentals.
| App | Hardcoded secrets found | SSL pinning | Root detection | Hooking detection | Exposed Android vulnerabilities |
| --- | --- | --- | --- | --- | --- |
| DeepSeek | ✅ | ❌ | ❌ | ❌ | Tapjacking |
| Perplexity | ❌ | ❌ | ❌ | ❌ | Tapjacking |
| ChatGPT | ✅ | ❌ | ❌ | ❌ | Janus, StrandHogg, Tapjacking |
Whether it's hardcoded secrets, lack of SSL pinning, or absence of runtime defenses, each of these apps missed the mark in critical areas. These aren’t edge cases—they’re table stakes in mobile app security.
As AI apps rush to redefine productivity, education, and creativity, the infrastructure powering them, especially on mobile, must be just as robust. The current state of AI app security tells us we’re not there yet.
As Rishika Mehrotra, Chief Strategy Officer at Appknox, puts it:
The AI revolution needs a security revolution alongside it. Innovation without protection isn’t just a risk—it’s a liability. Through this series, we set out to spark a conversation—not just about what’s broken, but about what needs to change. I hope it’s served as both a wake-up call and a roadmap.
Act before attackers exploit these unnoticed gaps. Take control of your app security before silent, sophisticated threats compromise your data, reputation, and bottom line.
Sign up for a free trial today and see how Appknox secures entire app portfolios with holistic, binary-based scanning. Join the 100+ global enterprises that vouch for Appknox.