The Compliant AI Blog
AI Surveillance Exposed... by ChatGPT Surveilling Customers
OpenAI caught a Chinese cyber operation when PRC operatives used ChatGPT to debug their surveillance code
Don’t get us wrong: it’s good that OpenAI and other AI providers monitor their platforms to make sure they aren’t being used by bad actors and nation states. However, that monitoring presents a unique challenge for organizations in regulated environments, like healthcare and government, that need to protect user data.
Last week, OpenAI dropped a bombshell: it had discovered a Chinese intelligence operation using AI to monitor Western social media for anti-Chinese content in real time. The investigation, reported by The New York Times, reveals a startling case of AI-powered surveillance in action, one that underscores the security, privacy, and compliance vulnerabilities of commercial tools like ChatGPT and Claude AI.
Here’s the kicker, and we mentioned it above: ChatGPT flagged the issue because OpenAI was monitoring how its own technology and its users’ data were being used. We’re happy that OpenAI helped combat bad actors, but you can’t forget that OpenAI is not private. That’s another reason to use private, compliant AI in industries with laws that protect data.
This discovery, detailed in a February 21, 2025 New York Times report, isn’t just another tech news story. It’s a wake-up call about the double-edged nature of using commercially available and public AI tools—the same ones your organization might be using right now for everything from drafting emails to analyzing patient data.
Let’s break down what happened, what it means for your security (especially if you’re in healthcare or government), and what you can actually do about it.
In an era where digital privacy is paramount, the lesson applies to industries ranging from healthcare to government. The story reveals how sophisticated adversaries exploit AI and, more importantly, how mainstream platforms may inadvertently expose your sensitive data. This post unpacks the findings, explores the technical nuances, and offers guidance on implementing secure, HIPAA-compliant solutions, like those offered by Hathr.AI, that can safeguard your operations without compromising on functionality.
The Alarming Discovery: AI Surveillance in Action
In a groundbreaking report, OpenAI uncovered evidence that a Chinese intelligence operation was employing AI to monitor Western social media in real time. The operation, codenamed “Peer Review”, exploited OpenAI’s own technology to debug its surveillance code—a discovery that highlights a disturbing irony: the very AI models designed to drive innovation can also be repurposed for mass surveillance.
According to the New York Times article by Cade Metz, the Chinese operation wasn’t working in isolation. It was part of a broader campaign that also included a project known as “Sponsored Discontent”, which generated disinformation targeting both dissidents and foreign audiences. This multi-pronged approach to information control reveals how easily powerful AI tools can be subverted for strategic purposes, raising urgent questions about AI privacy and security.
The fascinating part? These sophisticated operators slipped up when they used OpenAI’s technology to help debug their surveillance code—essentially revealing their operation to the very company that makes ChatGPT.
Ben Nimmo, OpenAI’s principal investigator, put it perfectly: “Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models.”
Technical Insights: The Backbone of a Surveillance System
For the tech-savvy, the details are both fascinating and concerning. The Chinese surveillance system reportedly built its framework on Meta’s open-source Llama model (wow, thanks Zuck!) rather than on OpenAI’s technology. That choice underscores how accessible advanced AI capabilities have become, and it serves as a reminder that not all AI implementations are created equal in terms of security. Don’t forget, though, that the operators still turned to a foundation model from OpenAI to debug their code, apparently because the open-source model alone wasn’t up to the task.
Key technical components of such a system include (a short code sketch follows this list):
- Real-Time Data Collection: Utilizing APIs and advanced web scraping techniques to aggregate data from social platforms.
- Natural Language Processing: Multi-layered NLP systems parse sentiment, context, and even subtle cues in user-generated content.
- Entity Recognition: Tools identify and track mentions of specific names or organizations, which can be used to tailor surveillance efforts.
- Distributed Computing: Handling massive data volumes across a robust, scalable infrastructure.
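To make the entity-recognition component concrete, here is a minimal sketch using the open-source spaCy library. Everything in it is an assumption for illustration (the example posts, the pipeline name); none of it comes from the reported surveillance system.

```python
# Minimal sketch of an entity-recognition step over social posts,
# using spaCy's off-the-shelf named-entity recognizer.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with NER

posts = [
    "Protesters gathered outside the embassy in Washington on Friday.",
    "Jane Doe criticized the new policy in a widely shared thread.",
]

for post in posts:
    doc = nlp(post)
    # doc.ents holds spans labeled PERSON, ORG, GPE (locations), etc.
    print([(ent.text, ent.label_) for ent in doc.ents])
```

The unsettling part is how little code the building blocks require; what makes these systems dangerous is the scale and infrastructure wrapped around them.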
These elements come together to create systems that, while impressive in capability, pose serious risks for ChatGPT privacy and Claude privacy alike. With the same models potentially storing and processing sensitive information, the critical questions arise: Is ChatGPT HIPAA compliant? Is Claude AI HIPAA compliant?
For organizations dealing with regulated data, the answer often leans toward “no,” highlighting the urgent need for compliant AI alternatives.
The Uncomfortable Truth: Your AI Tools Are Watching You
Let’s talk about the elephant in the room. The only reason OpenAI caught this surveillance operation is because they themselves monitor how people use their technology. Yes—the same ChatGPT you might be using to draft that important email or generate code is watching and analyzing your interactions.
This creates what security professionals call a “surveillance paradox”—where the very tools we rely on for productivity are simultaneously monitoring us, often in ways we don’t fully understand.
“Most people have no idea about the monitoring capabilities built into these platforms,” explains Dr. Helen Nissenbaum, privacy researcher at Cornell Tech. “There’s a profound asymmetry of information where users focus on the convenience while remaining unaware of the surveillance infrastructure underneath.”
For organizations handling sensitive information, this raises serious questions:
- What happens to the prompts you enter into these systems?
- Are they stored? Analyzed? Used for training?
- Who has access to this information?
- What would happen if this data were compromised?
These questions aren’t just academic—they have real implications for compliance, security, and trust.
Privacy experts warn that the monitoring built into these systems can lead to unforeseen consequences. Without robust safeguards, the personal data processed through these platforms may be vulnerable to unauthorized access, data breaches, or misuse. This is particularly problematic for industries bound by stringent regulations, such as healthcare, where AI HIPAA compliance isn’t just recommended—it’s legally mandated.

Compliance and Security in AI: Why It Matters
For organizations handling sensitive or regulated data, ensuring compliance isn’t optional. HIPAA, FedRAMP, and other regulatory frameworks demand that data is managed with utmost care. Unfortunately, many mainstream AI platforms fall short in several critical areas:
- Data Processing Location: Most commercial AI services process your data on third-party servers, which may not meet HIPAA or other regulatory standards.
- Data Retention and Usage: The practice of retaining and using input data for model training can expose sensitive information, raising concerns about both ChatGPT privacy and Claude privacy.
- Business Associate Agreements (BAAs): Many popular AI providers do not offer BAAs, leaving healthcare organizations particularly exposed.
- Audit Capabilities: Without comprehensive audit trails, it’s challenging to track who accessed what data and when—an essential requirement for AI Federal Compliance.
So why do these things matter? If you’re in healthcare or government, consider your position:
- You handle the most sensitive personal data possible
- You’re governed by strict HIPAA compliance requirements
- You’re a prime target for both cybercriminals and state actors
- You likely need AI capabilities to stay competitive
In essence, while these platforms offer innovative solutions, not every tool has the robust security and compliance required for controlled workloads. This gap has spurred the development of specialized solutions—like Hathr AI’s HIPAA compliant chat API for Healthcare and Government users and other compliant AI systems—that are purpose-built to meet the highest standards of data protection.
Dr. Kevin Fu, who previously served as the FDA’s Acting Director of Medical Device Cybersecurity, puts it bluntly: “Healthcare organizations can’t afford to treat AI security as an afterthought. Each new AI integration represents a potential attack surface that must be secured against increasingly sophisticated threats.”
Is the AI you use HIPAA or Federally Compliant? Probably Not.
Let’s get specific about HIPAA compliance and AI. Most mainstream AI platforms—including the ones you’re probably using—were designed for general use, not healthcare compliance. This creates several technical challenges:
- Data processing location:
HIPAA requires covered entities to maintain control over protected health information (PHI), but most AI platforms process your data on their own servers. At a minimum, you should know where the servers that store your data are located, and whether the provider can document that those servers have adequate controls and protections.
- Data retention:
Many AI providers retain user inputs for model improvement, creating compliance risks if those inputs contain PHI, PII, federal data, or other proprietary information. And unless your tool implements specific privacy and security measures that isolate that information from the AI provider (see the sketch after this list), you may also violate nondisclosure or data-use agreements.
- Business Associate Agreements:
HIPAA requires BAAs with any vendor handling PHI, but many AI providers don’t offer these agreements for their standard services. For other types of federal information, like Controlled Unclassified Information or classified information, you need appropriate infrastructure, segmentation, and regulatory approval to handle it.
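To make the isolation point from the data-retention item concrete, here is a minimal, hypothetical sketch of scrubbing obvious identifiers before a prompt ever leaves your environment. The regex patterns are illustrative assumptions and nowhere near a complete PHI detector; real deployments use dedicated de-identification tooling.

```python
# Illustrative only: redact obvious identifiers before sending a prompt
# to any third-party AI API. Not a complete PHI/PII scrubber.
import re

REDACTIONS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",             # U.S. Social Security numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",     # email addresses
    r"\b\(?\d{3}\)?[ -]?\d{3}-\d{4}\b": "[PHONE]", # U.S. phone numbers
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers so they never reach the provider."""
    for pattern, token in REDACTIONS.items():
        prompt = re.sub(pattern, token, prompt)
    return prompt

print(redact("Patient Jane, SSN 123-45-6789, reachable at jane@example.com"))
# -> Patient Jane, SSN [SSN], reachable at [EMAIL]
```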
A healthcare CIO I spoke with recently summed it up perfectly: “We were shocked to discover how many of our staff were using public AI tools for work that potentially involved patient information—with absolutely no controls or visibility.”
National Security Dimensions: Beyond Just Privacy
While healthcare concerns focus on patient privacy and compliance, the national security implications run even deeper. General Paul Nakasone, former commander of U.S. Cyber Command, recently warned that “the weaponization of generative AI for intelligence collection represents a significant evolution in the cyber threat landscape—one that challenges our traditional defensive paradigms.”
What makes AI-powered surveillance particularly concerning from a national security perspective is its scale, speed, and learning capability:
- Massive scalability: AI systems can monitor millions of communications simultaneously, without human limitations.
- Cross-language analysis: Modern AI can work across dozens of languages with near-human comprehension.
- Pattern recognition: AI excels at identifying subtle patterns that human analysts might miss.
- Continuous operation: Unlike human teams, AI surveillance runs 24/7 without breaks or shift changes.
“We’re witnessing the early stages of an AI-enabled intelligence revolution,” explains Dr. Herbert Lin, senior research scholar at Stanford University’s Center for International Security and Cooperation. “The technical complexity of these systems means that assessing capabilities and vulnerabilities requires specialized expertise that bridges traditional intelligence analysis and cutting-edge machine learning.”
For government agencies and contractors, this evolving threat landscape creates urgent needs for AI systems that meet rigorous security and compliance requirements—well beyond what mainstream platforms typically offer.
Compliant AI Technical Solutions You Actually Need
So what does a secure, compliant AI implementation actually look like? Whether you’re in healthcare dealing with HIPAA requirements or government handling sensitive information, you need solutions specifically designed for regulatory compliance.
What Makes an AI Platform HIPAA Compliant?
Truly HIPAA-compliant AI platforms implement several critical technical features:
- End-to-end encryption: All data should be encrypted both in transit and at rest, ideally with customer-controlled keys. Look for FIPS 140-2 validated cryptography with AES-256 or stronger, and TLS 1.3 for all API traffic (a short sketch follows this list).
- Data isolation architecture: Your data should be processed in isolated environments separate from other customers.
- No training on your data: The provider should contractually commit not to use your inputs to train or improve their models.
- Comprehensive audit logging: Every interaction should be logged with user identification, timestamps, and action details.
- Transparent monitoring: You should know exactly what monitoring occurs and have control over it.
- Deployment flexibility: Options for on-premises or private cloud deployment provide greater data control.
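As a rough illustration of two items on that list, encryption at rest and audit logging, here is a minimal sketch built on Python’s cryptography library. The key handling and log destination are simplified assumptions; in production the key would live in a managed KMS/HSM and the log in tamper-evident storage.

```python
# Sketch of AES-256-GCM encryption at rest plus a structured audit entry.
import json
import os
from datetime import datetime, timezone

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # production: customer-controlled KMS key
aesgcm = AESGCM(key)

def encrypt_at_rest(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (nonce, ciphertext) suitable for storage."""
    nonce = os.urandom(12)  # standard 96-bit GCM nonce
    return nonce, aesgcm.encrypt(nonce, plaintext, None)

def audit_entry(user_id: str, action: str) -> str:
    """One log line: who did what, and when (UTC)."""
    return json.dumps({
        "user": user_id,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

nonce, blob = encrypt_at_rest(b"chat transcript containing PHI")
print(audit_entry("clinician-42", "chat.create"))
```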
Companies like Hathr.AI have emerged specifically to address these requirements, offering HIPAA-compliant alternatives to mainstream AI platforms. These specialized solutions maintain the functionality you need while implementing the technical controls necessary for regulatory compliance.
Federal Compliance: Beyond HIPAA
Government agencies face even more stringent requirements through frameworks like FedRAMP (Federal Risk and Authorization Management Program). AI systems handling federal information need additional safeguards:
- Supply chain security: Verified provenance of model components and training data. Don’t forget Section 889 and the need to trace your products (and who funds them) back to their source.
- U.S.-based processing: Data processing restricted to U.S. territories.
- Advanced threat protection: Make sure the provider applies security and privacy standards across its own organization as well as the environment and servers that host your information. Don’t get audit-ready only to discover that your pristine environment is undermined by a provider hosting your data on a random server it can’t vouch for.
There’s a reason that Hathr AI distinguishes itself by hosting its data on AWS GovCloud, which is authorized for a wide range of controlled information, including CUI and ITAR data.
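As a small illustration of what region pinning can look like in practice, here is a sketch using the boto3 AWS SDK. It assumes you already hold GovCloud credentials; the client configuration is the only point being made, and nothing here is specific to Hathr AI’s actual architecture.

```python
# Pin all storage calls to an AWS GovCloud (US) region.
# GovCloud is a separate AWS partition with its own credentials.
import boto3

s3 = boto3.client("s3", region_name="us-gov-west-1")

# Every call through this client stays inside the GovCloud partition.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```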
“The integration of AI into federal systems creates a fundamentally new security paradigm,” notes former Department of Defense CISO Brett Goldstein. “We’re no longer simply protecting static data—we’re securing systems that can generate new information and potentially reveal patterns that weren’t explicitly encoded.”
How to Evaluate Your AI Provider: Questions You Should Be Asking
Not all AI platforms are created equal when it comes to security and compliance. Here are the questions you should be asking any AI provider you’re considering, especially for sensitive applications.
The answers to these questions can reveal whether a provider built their platform with compliance in mind or merely added it as an afterthought:
1. Data Processing and Storage
- “Where are the servers that process and store my data, and do they stay in U.S. regions?”
- “How long do you retain my inputs, and for what purposes?”
- “Are my inputs ever used to train or improve your models?”
- “Is my data processed on third-party servers, and can you document their controls?”
2. Security Measures
- “What encryption do you implement for data in transit and at rest?”
- “How is my data segregated from other customers?”
- “What security certifications have you obtained?”
- “Have you conducted independent security assessments?”
3. Compliance Documentation
- “Can you provide a HIPAA Business Associate Agreement?”
- “What level of FedRAMP authorization does your environment have, if any?”
- “Can you provide compliance documentation specific to AI systems?”
- “Do you have SOC 2 Type II certification or another certification that meets or exceeds those standards?”
- “Can you deploy your tool into our environment?”
- “Do you follow NIST 800-series data protection and security standards?”
4. Monitoring Transparency
- “What aspects of my usage do you monitor?”
- “How is monitoring data stored and protected?”
- “Who has access to my usage data within your organization?”
- “Can I opt out of certain types of monitoring?”
Why Choose Hathr.AI: A Compliant AI Alternative for the Modern Age
Amid these challenges, Hathr.AI stands out as a beacon of secure and compliant AI technology. Unlike mainstream platforms that may compromise on AI privacy and compliance, Hathr.AI is engineered with a focus on HIPAA compliance and Federal Compliance. Their solutions include:
- HIPAA-Compliant & NIST 800-171 Chat API: Ensuring that sensitive healthcare data is processed in a secure, compliant environment.
- End-to-End Encryption and Data Isolation: Providing robust protection for your data at every stage.
- No Data Reuse for Training: Guaranteeing that your inputs remain confidential and are never repurposed.
- Comprehensive Audit Trails: Delivering the transparency required to meet strict regulatory standards.
For organizations questioning, “Is ChatGPT HIPAA compliant?” or “Is Claude AI HIPAA compliant?”, the answer is often “no.” In contrast, Hathr.AI offers a truly compliant AI solution that bridges the gap between innovative AI capabilities and the rigorous demands of data privacy and security.
Conclusion: Navigating the New Compliant AI Landscape with Confidence
The revelation of an AI-powered surveillance tool in the New York Times report serves as a stark reminder of the hidden risks in our current AI ecosystem. Whether it’s ChatGPT privacy concerns or the broader issues of AI privacy and compliance, the message is clear: not all AI is created equal.
For industries handling sensitive data, particularly healthcare, the need for a secure, compliant AI solution is more critical than ever. By adopting best practices in data management, implementing robust technical safeguards, and choosing platforms designed for compliance—like Hathr.AI—you can harness the transformative power of AI without compromising on security.
In a world where technology is both a tool and a potential threat, ensuring that your AI systems are built with compliance in mind is not just a regulatory necessity—it’s a strategic imperative. Embrace the future with confidence, secure your data, and choose a path that prioritizes HIPAA compliant chat API solutions and overall compliant AI practices.
References
- Cade Metz, The New York Times, February 21, 2025: https://www.nytimes.com/2025/02/21/technology/openai-chinese-surveillance.html