
How Safe is the ChatGPT Android App? An Appknox Study

Is the ChatGPT app safe? Discover hidden security flaws in ChatGPT’s Android app, real risks, and expert tips to protect your data—read before you install!
  • Posted on: Jun 11, 2025
  • By Raghunandan J
  • Read time: 4 mins
  • Last updated on: Jun 12, 2025

Brilliant AI, broken defenses?

AI-powered apps are revolutionizing how we search, learn, and communicate, but the rapid pace of innovation has come at a cost: security is often an afterthought.

As part of our AI App Security Analysis Series, we’ve been scrutinizing some of the most popular AI tools on Android for hidden vulnerabilities that could put millions of users at risk.

After revealing major security flaws in DeepSeek and Perplexity AI, our latest deep dive focuses on ChatGPT’s Android app—one of the most downloaded AI apps globally. Despite the sophistication of the AI under the hood, the mobile app’s security posture is alarmingly weak.

Is ChatGPT safe?

 

No, not really.

When we decided to test the ChatGPT Android app, we assumed we’d be in for a different kind of audit. After all, this wasn’t a small team racing to ship the next big thing—this was OpenAI. Backed by billions, powered by the most sophisticated language model on the planet, and downloaded by millions. If anyone had the resources to build a secure mobile app, it was them.

Instead, what we found was surprisingly risky—even for a company leading the AI race.

Despite the intelligence behind the scenes, the app’s security posture was riddled with issues we’ve seen time and again in this series. Old vulnerabilities. Missing controls. Zero runtime defense.

In short, the AI might be brilliant, but the mobile app? Not so much. We expected better from ChatGPT.

Security issues in the ChatGPT Android version

Our static and dynamic analysis of the ChatGPT Android app (v1.2025.133) revealed multiple high- and critical-risk vulnerabilities, including:

1. Hardcoded secrets

Attack type: Credential exposure
Risk level: Critical

We discovered hardcoded Google API keys embedded in the app’s code. These can be easily extracted and misused, allowing attackers to impersonate requests or interact with backend systems.

How could ChatGPT fix it?

  • Store sensitive keys securely using environment variables, encrypted vaults, or secure key management services (see the sketch after this list).
  • Rotate API keys regularly and immediately revoke or replace any exposed keys to minimize the risk of misuse.
  • Restrict API key access using granular permissions, IP or app restrictions, and the principle of least privilege to prevent unauthorized use.
  • Monitor and log API key usage to detect suspicious activity and respond quickly to potential abuse.
  • Follow secure key management best practices to prevent secrets from being committed to version control.
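
To make the first point concrete, here's a minimal Kotlin sketch of the pattern we generally recommend on Android: don't compile the key into the APK at all; have the app obtain a short-lived credential from your own backend at runtime and cache it in encrypted storage. The preference file name, key name, and the backend-issued credential are illustrative assumptions, and the snippet assumes the androidx.security:security-crypto library.

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Illustrative sketch: the credential is issued by your own backend at
// runtime (e.g. after user sign-in) and is never shipped inside the APK.
fun cacheRuntimeCredential(context: Context, apiKey: String) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val prefs = EncryptedSharedPreferences.create(
        context,
        "secure_prefs",                       // illustrative file name
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

    // Values are encrypted at rest; the master key lives in the Android Keystore.
    prefs.edit().putString("runtime_api_key", apiKey).apply()
}
```

Even then, anything stored on the device can eventually be extracted, so the strongest option is to keep the Google API key server-side and proxy those calls through your own backend.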

2. No SSL pinning

Attack type: Impersonation attack
Risk level: Critical

The app does not implement SSL certificate pinning. This makes it vulnerable to man-in-the-middle (MitM) attacks, where an attacker intercepts and manipulates data in transit.

How could ChatGPT fix it?

  • Implement SSL certificate pinning to ensure the app only communicates with trusted servers, blocking man-in-the-middle (MitM) attacks.
  • Use established libraries (like OkHttp, TrustKit, or Alamofire) or manual pinning logic to validate server certificates or public keys during every SSL/TLS handshake (see the OkHttp sketch after this list).
  • Regularly update and test pinned certificates or keys, and plan for certificate rotation to avoid connection failures when certificates expire or change.
  • Monitor for failed pinning attempts and log incidents to detect potential impersonation or interception attempts.
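
Here is a minimal OkHttp pinning sketch, assuming the app already uses OkHttp; the hostname and both pins are placeholders that would be replaced with the real API domain and the base64-encoded SHA-256 hashes of its certificate public keys.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Placeholder host and pins -- always include at least one backup pin so a
// planned certificate rotation doesn't lock users out.
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=")
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()

// A TLS handshake whose certificate chain matches neither pin now fails
// (SSLPeerUnverifiedException) instead of silently trusting an
// attacker-supplied certificate.
```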

3. No root detection

Attack type: Privilege escalation
Risk level: High

ChatGPT runs normally on rooted devices, leaving it open to escalated privileges, system-level tampering, and data extraction.

How could ChatGPT fix it?

  • Integrate robust root detection using libraries like RootBeer or SafetyNet.
  • Implement multiple, layered root checks, such as detecting the presence of su binaries, root management apps, modified system properties, and critical directory changes, to strengthen detection and minimize bypass risks (see the sketch after this list).
  • Run root detection at app startup and during sensitive operations, disabling key features or blocking access if root is detected to prevent privilege escalation and tampering.
  • Regularly update and test root detection logic to stay ahead of new rooting and bypass techniques.
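
To show what "layered" can mean in practice, here's a minimal sketch using only standard Android and Java APIs. The paths and signals are illustrative, each check is bypassable on its own, and in production they would sit alongside a maintained library such as RootBeer or the Play Integrity API.

```kotlin
import android.os.Build
import java.io.File

// Heuristic root indicators -- individually weak, stronger in combination.
fun deviceLooksRooted(): Boolean {
    // 1. Common locations of the su binary and root-manager artifacts.
    val suspiciousPaths = listOf(
        "/system/bin/su", "/system/xbin/su", "/sbin/su",
        "/data/local/xbin/su", "/system/app/Superuser.apk"
    )
    val hasSuArtifacts = suspiciousPaths.any { File(it).exists() }

    // 2. Builds signed with test keys are a common sign of custom ROMs.
    val hasTestKeys = Build.TAGS?.contains("test-keys") == true

    // 3. Can the shell actually locate su?
    val suOnPath = runCatching {
        Runtime.getRuntime().exec(arrayOf("which", "su"))
            .inputStream.bufferedReader().readLine() != null
    }.getOrDefault(false)

    return hasSuArtifacts || hasTestKeys || suOnPath
}
```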

4. Vulnerable to known Android attacks

We identified exposure to multiple high-profile Android vulnerabilities:

Janus (CVE-2017-13156)

Attack type: APK modification and malware injection
Risk level: Critical

Allows attackers to inject malicious code into a signed APK without invalidating its original signature.

StrandHogg

Attack type: Phishing and identity theft
Risk level: Critical

Enables malicious apps to hijack legitimate app screens through task hijacking and steal credentials.

Tapjacking

Attack type: UI manipulation
Risk level: High

Tricks users into interacting with hidden or overlaid UI elements; a simple platform-level mitigation is sketched at the end of this section.

How could ChatGPT fix these vulnerabilities?

  • Keep all libraries, SDKs, and dependencies up to date with the latest security patches.
  • Perform regular security testing, code reviews, and vulnerability assessments before each release.
  • Monitor app behavior in real time to detect and respond to emerging threats.
  • Store sensitive data using secure storage solutions, such as Android Keystore, and enforce strong access controls.
  • Establish a transparent vulnerability disclosure process and respond rapidly to reported issues.
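
Of the three attacks above, tapjacking has the most direct platform-level mitigation: sensitive views can simply refuse touches that arrive while another window is drawn over them. A minimal Kotlin sketch follows (the view passed in is illustrative; the same behavior can be declared in layout XML with android:filterTouchesWhenObscured="true").

```kotlin
import android.view.MotionEvent
import android.view.View

// Drop touches delivered while another window fully covers this view --
// the classic tapjacking setup where an overlay lures the tap.
fun hardenAgainstTapjacking(sensitiveView: View) {
    sensitiveView.filterTouchesWhenObscured = true

    // Stricter variant: also swallow touches when the view is only
    // partially covered (flag reported on API 29+).
    sensitiveView.setOnTouchListener { _, event ->
        (event.flags and MotionEvent.FLAG_WINDOW_IS_PARTIALLY_OBSCURED) != 0
    }
}
```

StrandHogg-style task hijacking is typically addressed in the manifest (for example, an empty taskAffinity on exported activities), while Janus is mitigated by signing with APK Signature Scheme v2 or newer and raising the minimum supported Android version beyond the affected releases.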

5. No hooking or debug detection

Attack type: Runtime manipulation
Risk level: High

The app doesn’t attempt to detect Frida/Xposed frameworks or block use in debug/ADB-enabled environments, making it easy to tamper with runtime behavior.

How could ChatGPT fix this vulnerability?

  • Implement runtime checks to detect the presence of hooking frameworks such as Frida and Xposed (see the sketch after this list).
  • Block app execution or restrict sensitive features if hooking tools or suspicious instrumentation are detected.
  • Detect and prevent execution in debug or ADB-enabled environments by monitoring system flags and device status.
  • Obfuscate critical code paths and use anti-tampering techniques to make runtime manipulation more difficult.
  • Regularly update detection logic to stay ahead of new hooking and debugging tools.
  • Log and alert on any suspected tampering attempts for further investigation and response.
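
Here's a minimal sketch of what such checks can look like with public Android APIs alone; each signal (attached debugger, ADB enabled, hooking frameworks mapped into the process) is a heuristic that a determined attacker can bypass, so it should feed a broader runtime protection strategy rather than stand alone.

```kotlin
import android.content.Context
import android.os.Debug
import android.provider.Settings
import java.io.File

// Heuristic tamper signals -- meant to be combined, logged, and acted on.
fun runtimeLooksTampered(context: Context): Boolean {
    // 1. Is a debugger currently attached to this process?
    if (Debug.isDebuggerConnected()) return true

    // 2. Is USB debugging (ADB) enabled on the device?
    val adbEnabled = Settings.Global.getInt(
        context.contentResolver, Settings.Global.ADB_ENABLED, 0
    ) == 1
    if (adbEnabled) return true

    // 3. Are known hooking frameworks mapped into our process memory?
    val maps = runCatching { File("/proc/self/maps").readText() }.getOrDefault("")
    return listOf("frida", "xposed", "substrate").any {
        maps.contains(it, ignoreCase = true)
    }
}
```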

Why this matters

These aren’t just theoretical risks. Attackers love this stuff because it works.

  • Data theft: Intercepted sessions and exposed secrets can compromise users.
  • Abuse and phishing: UI hijacking and tapjacking vulnerabilities are used in real-world fraud campaigns.
  • Trust erosion: When flagship apps fail to implement basic protections, it sends a message to the rest of the ecosystem—security is optional.

Three apps, one message

Across this series, we tested three of the most talked-about AI apps: DeepSeek, Perplexity, and now ChatGPT. The names differed, but the security story remained frustratingly similar.

We didn’t go into this looking to find fault. We wanted to understand how secure AI really is in your pocket. What we uncovered was a clear pattern of rushed releases and missed fundamentals.

App | Hardcoded Secrets | SSL Pinning | Root Detection | Hooking Detection | Android Vulnerabilities

DeepSeek |  |  |  |  | Tapjacking

Perplexity |  |  |  |  | Tapjacking

ChatGPT | Found | Missing | Missing | Missing | Janus, StrandHogg, Tapjacking

Whether it's hardcoded secrets, lack of SSL pinning, or absence of runtime defenses, each of these apps missed the mark in critical areas. These aren’t edge cases—they’re table stakes in mobile app security.

What comes next?

As AI apps rush to redefine productivity, education, and creativity, the infrastructure powering them, especially on mobile, must be just as robust. The current state of AI app security tells us we’re not there yet.

Expert Opinion


Rishika Mehrotra, Chief Strategy Officer, Appknox, believes that:

“The AI revolution needs a security revolution alongside it. Innovation without protection isn’t just a risk—it’s a liability. Through this series, we set out to spark a conversation—not just about what’s broken, but about what needs to change. I hope it’s served as both a wake-up call and a roadmap.”

 

Act before attackers exploit unnoticed security gaps. 

Take control of your app security before it is exposed to silent, sophisticated threats that can compromise your data, reputation, and bottom line.

Sign up for a free trial today and see how Appknox helps secure entire app portfolios with its holistic, binary-based scanning. Join 100+ global enterprises who vouch for Appknox.