Adversarial AI: How Threat Actors Are Targeting Healthcare Machine Learning
Post Summary
Adversarial AI is a growing threat to healthcare, targeting machine learning systems used for critical tasks like diagnostics, patient care, and resource management. These attacks manipulate AI models to cause errors that appear normal, posing risks to patient safety and data integrity. Here's what you need to know:
- Types of Attacks:
  - Data Poisoning: Corrupts training data to embed malicious logic, leading to systematic errors like misdiagnoses.
  - Evasion Attacks: Alter input data to trick models into inaccurate predictions (e.g., manipulated X-rays).
  - Model Inversion/Extraction: Steals sensitive training data or replicates proprietary models, exposing patient privacy and intellectual property.
- Why Healthcare is Targeted:
  - AI systems handle sensitive, life-critical decisions.
  - Attackers exploit gaps in security, with state-backed groups demanding ransoms up to $15 million.
  - 97% of compromised AI systems lack proper protections.
- Impact:
  - Patient safety risks: Faulty diagnoses, delayed treatments, and harmful recommendations.
  - Financial losses: Average data breach costs in healthcare reached $9.77 million in 2025.
  - Operational disruptions: Compromised systems can cascade errors across networks.
- Defense Strategies:
  - Adversarial Training: Introduce malicious examples during training to build model resilience.
  - Randomized Smoothing: Add noise to inputs to obscure attack vectors.
  - AI-Specific Security: Use tools to detect anomalies in model behavior.
  - Risk Management Platforms: Centralize oversight and enforce security policies.
Healthcare organizations must act now to protect their AI systems by implementing tailored defenses, monitoring for anomalies, and complying with updated regulations like HIPAA and FDA guidelines. Failure to address these threats could jeopardize patient safety, data security, and organizational stability.
Trusted CI Webinar: Securing Medical Imaging AI Models Against Adversarial Attacks
Types of Adversarial Attacks in Healthcare
Adversarial attacks on healthcare AI systems fall into three primary categories, each targeting different vulnerabilities in machine learning models. These attacks can have serious consequences for patient safety and the reliability of clinical decisions. Let’s break down how these attacks work and their potential effects on healthcare AI.
Data Poisoning Attacks
Data poisoning attacks compromise AI models during the training phase by introducing malicious data. Unlike attacks that occur during the model's operational use, these attacks embed false logic directly into the AI’s learning process. This can lead to systematic failures, such as misdiagnoses or flawed decision-making.
What makes these attacks particularly dangerous is their scale. Research shows that even a small number of poisoned samples - just 100 to 500 - can significantly undermine a healthcare AI system, with success rates often exceeding 60% [3]. For example, radiology AI systems can be manipulated with as few as 250 tampered images, representing a mere fraction (0.001% to 0.025%) of typical training datasets [3][4].
"Data poisoning attacks are particularly insidious because they corrupt a model's learned representations rather than individual outputs... the corruption resides within the model's learned weights rather than in any detectable external manipulation."
- Farhad Abtahi et al., Journal of Medical Internet Research [3]
In healthcare, the implications are severe. Poisoned models in radiology or pathology might be trained to overlook critical conditions like tumors or pneumonia in specific demographic groups. Similarly, clinical language models used for treatment recommendations could be manipulated during their "Reinforcement Learning from Human Feedback" phase, resulting in biased or unsafe medication suggestions. Even resource allocation systems, such as those used for organ transplants or ICU triage, could be skewed to favor particular demographics or institutions.
Detecting poisoned models is particularly challenging. These models often pass standard validation tests and only reveal their flaws when triggered by specific inputs. In healthcare, such errors can mimic natural biases or dataset shifts, delaying detection for months or even years [3][4].
"A poisoned model might pass standard evaluations. It might behave correctly most of the time. But when it fails, it fails in a way the attacker intended."
Here are some common methods of data poisoning and their potential effects:
| Attack Type | Method | Healthcare Impact Example |
|---|---|---|
| Label Modification | Adding mislabeled data points to the training set | A pneumonia classifier labels infected lungs as "normal" |
| Poison Insertion | Injecting crafted samples with "triggers" | A white square on an X-ray causes the AI to ignore a tumor [5] |
| Data Modification | Editing existing records without changing dataset size | Subtle changes in EHR data influence triage decisions |
| Boiling Frog | Introducing small changes over multiple training cycles | Gradual bias in resource allocation models over time |
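The "Poison Insertion" row above can be made concrete with a short sketch. This is a toy illustration only, assuming random NumPy arrays as stand-ins for X-ray crops and synthetic labels; it shows how little code an attacker needs to stamp a backdoor trigger and flip the associated labels.

```python
import numpy as np

def add_trigger(image, size=4, value=1.0):
    """Stamp a small white square (the backdoor trigger) into the
    top-left corner of a grayscale image scaled to [0, 1]."""
    poisoned = image.copy()
    poisoned[:size, :size] = value
    return poisoned

def poison_dataset(images, labels, n_poison, target_label, rng):
    """Stamp the trigger onto a few samples and flip their labels, so a
    model trained on this data associates the trigger with the
    attacker's chosen class."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label  # e.g. "normal" even when disease is present
    return images, labels

rng = np.random.default_rng(0)
X = rng.random((1000, 28, 28))     # random arrays standing in for X-ray crops
y = rng.integers(0, 2, size=1000)  # synthetic labels: 0 = normal, 1 = pneumonia
X_poisoned, y_poisoned = poison_dataset(X, y, n_poison=250, target_label=0, rng=rng)
```

In this toy set 250 samples is a quarter of the data, but as the research above notes, what matters in practice is the absolute count, not the fraction: the same 250 images hidden in a million-image corpus would be 0.025% of the training data.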
Evasion Attacks
Evasion attacks manipulate input data during the AI model's active use to produce incorrect predictions. These attacks rely on subtle "perturbations" - small changes to medical images, ECG signals, or clinical text - that push the data across the model’s classification boundary.
A study conducted in December 2025 by Kristine T. Soberano and Kristine A. Condes demonstrated the impact of these attacks. By applying techniques like FGSM, PGD, and JSMA, they caused significant drops in accuracy: CNN models fell from 92% to 40%, ECG-based AI systems saw a 42% decline, and Transformer-based clinical NLP models experienced a 30% drop [6].
"Adversarial attacks are especially concerning within medicine as they may directly impact human wellbeing."
Healthcare AI systems are particularly vulnerable because they often depend on specific pixels or data points for diagnosis. Attackers exploit this by targeting those exact regions with perturbations. Alarmingly, these changes are often invisible to the human eye, meaning a radiologist might see a normal X-ray while the AI interprets it as indicating a specific condition.
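The FGSM technique mentioned in the study above can be sketched in a few lines. This is a minimal illustration against a logistic-regression scorer, chosen because its input gradient has a closed form, so no deep-learning framework is needed; the weights and "image" are synthetic stand-ins, not a real clinical model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.05):
    """Fast Gradient Sign Method: nudge every input feature by +/- eps in
    the direction that increases the loss. For a logistic model the
    gradient of the cross-entropy loss w.r.t. the input is (p - y) * w."""
    p = sigmoid(x @ w + b)
    grad = (p - y_true) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(1)
w = rng.normal(size=64)   # stand-in weights for a trained image scorer
b = 0.0
x = rng.random(64)        # flattened synthetic "image" in [0, 1]
x_adv = fgsm_perturb(x, w, b, y_true=1.0)
```

No pixel moves by more than eps = 0.05, yet the model's score for the true class drops. Against deep imaging models the same principle yields perturbations a radiologist cannot see.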
Beyond real-time input manipulation, attackers may also aim to reconstruct or steal entire models.
Model Inversion and Extraction
Model inversion and extraction attacks aim to reverse-engineer machine learning models, either to steal sensitive training data or to replicate the proprietary algorithms themselves. These attacks jeopardize both patient privacy and the intellectual property of healthcare organizations.
Model inversion attacks focus on reconstructing training data. By repeatedly querying the model, attackers can piece together sensitive information. For example, in one healthcare scenario, attackers used synthetic images to query a diagnostic model, ultimately recovering protected health information (PHI) from the training set. Such breaches would trigger mandatory notifications under HIPAA [7]. A foundational example of this attack occurred in 2015 when researchers Fredrikson et al. reconstructed facial images from a recognition system using only API access and a target’s name [7].
Model extraction attacks, on the other hand, aim to steal the model itself. By systematically querying the AI, attackers can recreate its functionality and parameters. This compromises an organization’s intellectual property and can have significant financial consequences. In 2025, the average cost of a data breach in the healthcare sector reached $9.77 million per incident [7].
The scope of this vulnerability is extensive. In 2025, 13% of organizations reported breaches targeting AI systems, and 97% of compromised systems lacked proper AI-specific access controls [7]. Smaller datasets, often used in healthcare AI, are particularly at risk due to overfitting, which makes it easier for attackers to reconstruct individual training examples. Under regulations like HIPAA and GDPR, machine learning models trained on personal data may themselves be considered personal data, meaning a successful inversion attack could be classified as a formal data breach [7].
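Model extraction can be shown in miniature. The sketch below assumes a hypothetical linear "risk scorer" hidden behind a query API; with enough probes, least squares recovers the hidden weights exactly. Real clinical models are nonlinear and require far more queries and a surrogate network, but the principle, turning query access into a copy of the model, is the same.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical victim: a proprietary linear risk scorer behind an API.
w_secret = rng.normal(size=16)

def victim_api(x):
    """The attacker can only call this endpoint; the weights stay hidden."""
    return x @ w_secret

# Extraction: issue enough probe queries to solve for the hidden parameters.
queries = rng.normal(size=(200, 16))
responses = np.array([victim_api(q) for q in queries])
w_stolen, *_ = np.linalg.lstsq(queries, responses, rcond=None)
```

This is also why query-rate limiting and anomalous-query detection are standard mitigations: the attack is bounded only by how many responses the attacker can collect.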
How Adversarial Attacks Affect Healthcare AI
Adversarial attacks on healthcare AI systems go beyond technical disruptions - they have far-reaching effects on patient care and the financial health of organizations. These attacks erode trust in AI technologies, jeopardize lives, and create significant operational challenges. For healthcare leaders, grasping the consequences of these vulnerabilities is essential to strengthening AI defenses.
Accuracy and Performance Degradation
Adversarial attacks can severely damage the performance of healthcare AI models, often rendering them unreliable for clinical use. A study published in Scientific Reports (2025) highlighted this issue using the ResNet-152 architecture and the NIH Chest X-Ray Dataset. Researchers found that "BadNets" attacks caused a pneumonia detection model to misinterpret key clinical features, instead focusing on a small white square deliberately placed in the image's corner. This manipulation caused the model's AUC (a measure of diagnostic accuracy) to plummet from 0.85 to 0.49, essentially reducing its reliability to that of a coin flip [5].
"Attack success depends on the absolute number of poisoned samples rather than their proportion of the training corpus, a finding that fundamentally challenges assumptions that larger datasets provide inherent protection."
- Farhad Abtahi, PhD, Karolinska Institutet [3]
Even with large datasets, healthcare AI systems remain vulnerable. Research shows that as few as 100 to 500 poisoned samples can achieve attack success rates of 60% or more, regardless of dataset size [3]. Such performance degradation directly undermines clinical reliability, putting patient outcomes at risk.
Patient Safety Risks
The consequences of compromised AI systems in healthcare can be devastating. These systems, when manipulated, can deliver incorrect diagnoses, delay treatments, or even suggest harmful interventions.
For instance, a Nature Communications study from January 2025 revealed how adversarial attacks on large language models could lead to dangerous clinical errors. Manipulated systems recommended ibuprofen for patients with renal disease and increased the rate of unsafe drug combination recommendations from 0.50% to a staggering 80.60% [8].
"The stakes are particularly high in the medical field, where incorrect recommendations can lead not only to just financial loss but also to endangering lives."
- Nature Communications [8]
These attacks often go unnoticed for extended periods, sometimes lasting six months to several years [3]. During this time, patients may receive flawed diagnoses or treatments. One 2026 threat analysis revealed that a radiology AI, compromised by an insider, disproportionately missed early-stage lung cancer in certain demographic groups. The issue was only uncovered years later during a retrospective review [3].
Adversarial attacks also pose a threat to public health. For example, attacks on GPT-4 led to vaccine recommendation rates dropping from 100% to just 3.98%, with the system generating persuasive arguments against vaccination. Such manipulations make it difficult for even experts to identify when a system has been compromised [8].
Financial and Operational Consequences
The fallout from these attacks isn't limited to clinical outcomes - it extends to the financial and operational stability of healthcare organizations. Disruptions in workflows, regulatory fines, and financial losses are common repercussions.
State-backed threat actors are increasingly targeting U.S. healthcare facilities, integrating adversarial AI techniques into larger extortion schemes [2].
"The era of episodic ransomware has evolved into a landscape defined by persistent, state-backed extortion and the emergence of safety-critical vulnerabilities within clinical AI systems."
- Threat Landscape Team [2]
Adversarial attacks can also be exploited for financial fraud. For example, attackers might manipulate AI systems to inflate billing codes, skew clinical trial results, or reduce insurance payouts [5]. Resource allocation systems, such as those used for organ transplants or ICU bed assignments, are another high-value target. Poisoning attacks on these systems can lead to biased decisions, deprioritizing certain demographics [3].
The risks extend across the entire healthcare ecosystem. A single attack on a commercial medical foundation model vendor could compromise systems in 50 to 200 healthcare institutions. This domino effect can disrupt operations for thousands of patients, impacting critical processes like scheduling, triage, and laboratory workflows. Persistent memory poisoning in these systems not only creates diagnostic errors but also introduces bottlenecks that amplify both safety risks and financial losses [3].
Why Defending Against Adversarial AI is Difficult
Healthcare organizations are facing a growing challenge: traditional cybersecurity tools just aren’t equipped to handle adversarial AI threats. These attacks don’t exploit system vulnerabilities in the usual way. Instead, they manipulate the learning mechanisms of AI models, making it essential to understand why conventional defenses fall short and how to address these risks effectively.
Why Traditional Cybersecurity Falls Short
Conventional tools like firewalls, antivirus software, and intrusion detection systems were designed to guard against attacks targeting system code. But adversarial AI attacks take a different route - they corrupt the internal workings of AI models, such as altering their weights, which are the parameters that help the model interpret data.
Here’s the tricky part: adversarial attacks don’t leave the usual traces like malware does. For example, data poisoning embeds false patterns directly into a model’s decision-making process. These corrupted outputs can look perfectly normal during standard validations. Imagine a radiology AI that seems to work fine during quality checks but consistently misses early-stage lung cancers in certain patient groups. That’s the kind of risk we’re talking about [3].
The belief that large datasets can dilute malicious samples is another misconception. Research shows that even a small number of poisoned samples - just 100 to 500 - can compromise a healthcare AI system, with success rates exceeding 60% [3][9].
Another problem? Traditional security assumes attackers need privileged access to cause harm. But adversarial AI doesn’t require admin-level access. Something as routine as entering clinical notes, uploading medical images, or documenting patient visits can be enough to introduce poisoned data over time, all without raising any alarms [9].
Healthcare-Specific Vulnerabilities
Healthcare systems face unique challenges that make them especially vulnerable to adversarial attacks. Privacy regulations like HIPAA and GDPR are crucial for protecting patient data, but they also create blind spots.
"Existing privacy regulations, including the Health Insurance Portability and Accountability Act and the General Data Protection Regulation, can hinder anomaly detection and cross-institutional audits, reducing visibility into adversarial actions." - Farhad Abtahi, PhD, Karolinska Institutet [3]
These regulations limit the ability to correlate data across patients or institutions, which is essential for spotting coordinated poisoning campaigns. Even when unusual patterns in AI predictions are detected, privacy laws often prevent organizations from collaborating effectively with others facing similar issues.
Adversarial attacks also mimic existing biases in healthcare, making them hard to detect. If a model starts making errors that affect specific demographics, it can easily be mistaken for natural dataset shifts or pre-existing clinical biases. In some cases, it takes months - or even years - to recognize the issue. For instance, poisoning just 250 images in a dataset of one million (a mere 0.025%) can lead to diagnostic failures that evade detection during routine checks [3].
Federated learning, a common approach in healthcare to keep patient data decentralized, adds another layer of difficulty. While it protects privacy by avoiding centralized data storage, it makes it harder to trace the source of malicious updates. If a poisoned model update occurs, pinpointing the exact node responsible can be nearly impossible [3][9].
The reliance on a small number of commercial foundation model vendors introduces yet another risk. Many healthcare organizations use AI systems from the same vendors, meaning a single breach at the vendor level could poison models across dozens - or even hundreds - of institutions [9]. This concentration of risk makes the entire ecosystem more vulnerable to cascading failures.
How Attacks Spread Across Healthcare Networks
Adversarial attacks don’t just exploit weaknesses in individual models - they take advantage of how interconnected healthcare systems are to amplify their impact. One of the most dangerous pathways is through clinical documentation pipelines. If an AI medical scribe or documentation system is compromised, it can inject corrupted data directly into Electronic Health Records (EHRs). Any clinical AI systems retrained on this poisoned data inherit the corruption [3][9].
This creates a domino effect. For example, a compromised AI scribe might subtly alter symptom descriptions in patient records. These changes then influence diagnostic AI tools, laboratory algorithms, and triage systems, spreading the corruption across the entire network. Because this happens through normal data flows, traditional monitoring tools often fail to detect it.
The rise of agentic AI systems - autonomous tools that manage tasks like scheduling, triage, and lab workflows - adds even more risk. These systems don’t just make predictions; they take actions that influence other processes. If such a system is poisoned, it could systematically misallocate resources or deprioritize care for certain patient groups [3].
Given how interconnected healthcare systems are, a targeted attack on one component can quickly escalate into a full-blown institutional crisis. Each compromised system may appear to function normally, but the subtle biases in its outputs can jeopardize patient safety and erode trust in AI-driven care. Understanding these vulnerabilities is a critical first step toward developing effective defenses against adversarial AI.
How to Defend Against Adversarial AI Threats
Protecting healthcare AI systems from adversarial attacks requires a different strategy than traditional cybersecurity. The key is to adopt methods tailored to healthcare environments and implement them effectively. Let’s explore some proven techniques.
Adversarial Training
To safeguard systems, healthcare AI models must learn to detect and resist malicious patterns. This is where adversarial training comes in - it involves adding crafted adversarial examples to the training dataset, teaching the model to recognize and reject harmful inputs [10].
Striking the right balance in adversarial data is critical. Research suggests that including 15% to 30% adversarial examples works best. Using less than 5% offers little protection, while over 50% can lead to an overly cautious system that rejects legitimate queries [10].
For instance, a balanced approach (30% adversarial data) can cut attack success rates from 45% to 12%, maintain a helpfulness score of 4.0 out of 5, and keep the over-refusal rate at just 3.5%. In contrast, excessive training (50% adversarial data) might lower attack success rates to 5% but also drop the helpfulness score to 3.2 and increase the over-refusal rate to 12%, limiting clinical usefulness [10].
| Mix Ratio (Adversarial %) | Attack Success Rate | Helpfulness Score | Over-Refusal Rate | Best For |
|---|---|---|---|---|
| 15% | 25% | 4.3/5 | 1.2% | Low-risk applications |
| 30% | 12% | 4.0/5 | 3.5% | Most healthcare systems |
| 50% | 5% | 3.2/5 | 12% | High-risk only |
To stay ahead of evolving threats, healthcare organizations should regularly update training datasets with real-world attack attempts. These datasets should include a variety of attack types, such as prompt injection, jailbreaking, and data extraction, to avoid overfitting to one specific threat [10].
For medical imaging systems, preprocessing techniques like input denoising and image compression can weaken adversarial alterations before they reach the AI model. Additionally, using multiple models together (model ensembling) can reduce the risk of a single point of failure [10].
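The dataset-mixing step behind the ratios above can be sketched as follows. This is a simplified illustration: the `perturb` callback and all data are synthetic placeholders, and a real pipeline would regenerate attacks (e.g., FGSM against the current model) each training epoch rather than once up front.

```python
import numpy as np

def mix_adversarial(clean_X, clean_y, make_adversarial, ratio=0.30, rng=None):
    """Augment a clean training set so that `ratio` of the final corpus
    consists of adversarial examples that keep their true labels,
    teaching the model to resist the perturbation."""
    rng = rng if rng is not None else np.random.default_rng()
    # Number of adversarial samples so adv / (clean + adv) == ratio.
    n_adv = round(len(clean_X) * ratio / (1.0 - ratio))
    idx = rng.choice(len(clean_X), size=n_adv, replace=True)
    adv_X = np.array([make_adversarial(clean_X[i]) for i in idx])
    X = np.concatenate([clean_X, adv_X])
    y = np.concatenate([clean_y, clean_y[idx]])  # true labels, not flipped ones
    return X, y

rng = np.random.default_rng(3)
X = rng.random((700, 8))           # synthetic feature vectors
y = rng.integers(0, 2, size=700)   # synthetic binary labels
perturb = lambda x: np.clip(x + 0.05 * np.sign(rng.normal(size=x.shape)), 0.0, 1.0)
X_mix, y_mix = mix_adversarial(X, y, perturb, ratio=0.30, rng=rng)
```

Swapping `ratio` to 0.15 or 0.50 reproduces the trade-off in the table above: more adversarial data lowers attack success but raises over-refusal.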
Randomized Smoothing
Randomized smoothing adds controlled noise to inputs, making it harder for attackers to manipulate model outputs. This technique works by averaging predictions from multiple noisy versions of the same input, effectively blurring the decision boundary that adversaries target.
In healthcare imaging, randomized smoothing is especially effective against pixel-level attacks, neutralizing manipulations while maintaining diagnostic accuracy. By introducing randomness, this method forces attackers to use stronger, more detectable perturbations, which can be flagged by human reviewers.
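The smoothing procedure described above is a short function in practice. The sketch below uses a hypothetical threshold classifier as a stand-in for an imaging model; the noise level and sample count are illustrative defaults.

```python
import numpy as np

def smoothed_predict(model, x, sigma=0.1, n_samples=100, rng=None):
    """Randomized smoothing: classify many Gaussian-noised copies of the
    input and return the majority vote, which blurs the exact decision
    boundary that pixel-level attacks rely on."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = x + rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    votes = np.array([model(v) for v in noisy])
    classes, counts = np.unique(votes, return_counts=True)
    return int(classes[np.argmax(counts)])

# Hypothetical threshold classifier standing in for an imaging model.
model = lambda img: int(img.mean() > 0.5)
bright = np.full((8, 8), 0.8)  # synthetic image well inside class 1
```

Because the final label is a vote over many noisy copies, an attacker can no longer flip the output with a tiny perturbation; only a large, and therefore detectable, change survives the averaging.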
AI-Specific Security Tools
Traditional security tools focus on system-level threats, but AI-specific tools are designed to detect unusual behaviors in model outputs. These tools monitor for anomalies in predictions, shifts in confidence levels, or unexpected activation patterns that could signal an attack.
For example, anomaly detection systems can flag inputs that deviate from expected patterns, whether in medical imaging or clinical documentation. Techniques like defensive distillation, which softens class probabilities, and gradient masking can also enhance security [1].
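One simple form of such monitoring is a z-score check on prediction confidence against a pre-deployment baseline. The sketch below is a minimal illustration with synthetic confidence scores and an assumed alert threshold; production systems would track many signals, not just the mean.

```python
import numpy as np

def confidence_drift_alert(baseline_scores, recent_scores, z_threshold=4.0):
    """Compare the mean prediction confidence of a recent window against
    the pre-deployment baseline; a large z-score flags a shift worth
    investigating (evasion attacks often depress confidence)."""
    mu, sd = np.mean(baseline_scores), np.std(baseline_scores)
    stderr = sd / np.sqrt(len(recent_scores)) + 1e-12
    z = abs(np.mean(recent_scores) - mu) / stderr
    return z > z_threshold

rng = np.random.default_rng(5)
baseline = rng.normal(0.90, 0.03, size=500)      # scores captured at go-live
normal_week = rng.normal(0.90, 0.03, size=50)    # routine operation
attacked_week = rng.normal(0.70, 0.03, size=50)  # evasion inputs drag confidence down
```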
Combining these tools with centralized oversight provides a more robust defense.
Using Risk Management Platforms
Given the interconnected nature of healthcare systems, centralized risk management is essential. Platforms like Censinet RiskOps™ provide visibility across the entire healthcare ecosystem, from third-party AI vendors to internal deployments. These platforms help teams monitor vulnerabilities, enforce policies, and coordinate responses effectively.
Censinet RiskOps™ offers features such as automated tracking of product integrations, identification of risk exposures, and generation of summary reports. This operational framework ensures security at scale. Additionally, Censinet AI™ streamlines risk assessments for AI systems and vendors, acting like an “air traffic control” system for AI risks, ensuring continuous oversight and swift action against vulnerabilities.
Regulatory and Compliance Requirements
Adversarial AI attacks present serious risks to patient safety and regulatory compliance. Starting February 16, 2026, healthcare organizations are required to conduct AI-specific risk analyses under the updated HIPAA Security Rule. These assessments must address threats like prompt injection, model inversion, and training data leakage [12]. Understanding how HIPAA and FDA regulations shape defense strategies is now a core part of maintaining both compliance and security.

HIPAA and Patient Data Security
The updated HIPAA regulations mandate that healthcare organizations address vulnerabilities unique to AI systems to protect patient data. Adversarial attacks, such as prompt injection or model inversion, can lead to breaches that must be reported under the HIPAA Breach Notification Rule. For instance, a prompt injection attack exposing another patient's clinical data constitutes a reportable breach [11]. Similarly, model inversion attacks that extract protected health information (PHI) from training data trigger breach notifications, with penalties reaching up to $2.13 million per violation category in 2026 [12].
Traditional Business Associate Agreements (BAAs) often fall short in addressing AI-specific concerns. For example, they may not clarify whether vendors can use PHI for training models or how embedded patient data should be managed after a contract ends [11]. A compliance expert emphasized, "Updating your BAAs for AI is not optional - it's a core HIPAA requirement" [11].
While HIPAA and GDPR aim to safeguard patient data, their restrictions on cross-patient correlation can delay the detection of poisoning campaigns by 6 to 12 months [3][9].
FDA Oversight of AI/ML-Enabled Medical Devices

HIPAA focuses on data security, but the FDA oversees the safety and effectiveness of AI-enabled medical devices. To date, the FDA has approved over 1,000 such devices for marketing in the U.S. [11][13]. In January 2025, the FDA released draft guidance that requires manufacturers to address specific adversarial threats, including data poisoning, model inversion, model evasion, data leakage, and performance drift caused by malicious manipulation [15][13].
Under Section 524B of the FD&C Act, manufacturers are now required to provide a Software Bill of Materials (SBOM), a postmarket vulnerability management plan, and evidence of a secure, lifecycle-based development process [16]. Premarket submissions must also include detailed threat models, risk analyses, and results from security testing methods like penetration testing and fuzzing [16].
"AI-enabled medical device makers need to consider cybersecurity issues in the life cycle of their products because cybersecurity threats can compromise the safety and/or effectiveness of a device, potentially resulting in harm to patients." – Betsy Hodge, Partner, Akerman [13]
The FDA’s "Total Product Life Cycle" approach emphasizes ongoing postmarket monitoring of AI model performance and security to address emerging risks [14][16].
Balancing AI Adoption with Security
Healthcare organizations must juggle the adoption of AI technologies with stringent regulatory requirements and the need to mitigate adversarial risks. The February 2026 HIPAA updates recognize that agentic AI systems - those capable of autonomously accessing, interpreting, and acting on PHI - demand tailored regulatory measures [12].
California's AB 489, effective January 1, 2026, adds another layer of responsibility by requiring healthcare providers to disclose the use of AI in patient diagnosis and treatment. It also mandates offering patients a human-only review option [12]. Additionally, organizations must maintain SBOMs for all AI systems to track third-party vulnerabilities [16] and update their Notice of Privacy Practices to reflect how AI systems utilize PHI, particularly when data is used for model refinement [11].
Platforms like Censinet RiskOps™ and Censinet AI™ help healthcare organizations manage these complex requirements. These tools centralize AI-related assessments, policies, and tasks, ensuring consistent oversight and accountability. By adopting such platforms, organizations can not only meet regulatory demands but also bolster their defenses against adversarial AI threats.
Conclusion: Protecting Healthcare AI from Adversarial Threats
The challenges and defenses we've explored highlight the pressing need for healthcare organizations to safeguard their AI systems from adversarial threats. Here's a summary of key insights and actionable steps to bolster security.
Key Takeaways
Adversarial AI attacks are no longer hypothetical - they are happening now, posing risks to patient safety, data accuracy, and the smooth operation of healthcare systems. Techniques like data poisoning, evasion attacks, and model extraction target the very algorithms healthcare AI relies on, making traditional cybersecurity measures insufficient. Since these attacks bypass network-level defenses, organizations need tailored, AI-focused solutions.
To counter these threats, a multi-layered approach is essential. Combining methods like adversarial training, randomized smoothing, and AI-specific security tools with robust risk management strategies provides a more comprehensive defense. Relying on a single solution isn't enough. Monitoring for signs such as unexpected drops in model performance or unusual prediction patterns is crucial. Establishing baseline metrics before deploying AI systems allows for quicker identification of potential attacks.
Next Steps for Healthcare Organizations
To strengthen defenses, start by conducting an AI inventory and risk assessment. Identify all machine learning systems in use, assess their importance to patient care and data sensitivity, and evaluate their vulnerability to adversarial threats. Establish baseline performance metrics for each system and monitor them regularly for anomalies. Develop clear incident response plans that outline escalation procedures, containment steps, and communication strategies specific to adversarial AI attacks.
Consider leveraging tools like Censinet's risk management platforms to enhance threat detection and streamline compliance efforts. With features like automated risk assessments and configurable review processes, platforms like Censinet AI™ help connect AI security with broader clinical risk management, ensuring that automation complements human decision-making.
Finally, form cross-functional teams to integrate adversarial AI defenses into everyday clinical workflows. Assign clear roles for AI security oversight, and establish governance structures to ensure critical findings reach the right stakeholders. By embedding these defenses into core operations, healthcare organizations can continue to innovate while prioritizing patient safety and data protection.
FAQs
How can we tell if a model has been poisoned without obvious errors?
Detecting model poisoning when errors aren't obvious can be tricky, but it's not impossible. AI explainability tools can help by highlighting unusual behavior, such as shifts in predictions or unexpected changes in feature importance. Another effective approach is to monitor performance on clean, unseen datasets. This can uncover subtle problems, like a drop in accuracy caused by minor adversarial tweaks.
To stay ahead of potential threats, it's important to conduct regular audits and use statistical analyses to identify inconsistencies. Incorporating adversarial training into your workflow can also help detect and address poisoning attempts early on. These strategies, when used together, can make it easier to spot and fix issues before they escalate.
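The clean-dataset check described above reduces to a small routine. The classifier, holdout data, and baseline accuracy below are hypothetical; the point is the pattern of re-scoring on data the model has never seen and alerting on any drop beyond a tolerance.

```python
def holdout_accuracy_check(model, X_clean, y_clean, baseline_acc, tolerance=0.05):
    """Re-score the model on a clean, never-before-seen holdout set and
    flag any accuracy drop beyond `tolerance` from the pre-deployment
    baseline - a cheap first signal that something may be poisoned."""
    preds = [model(x) for x in X_clean]
    acc = sum(p == t for p, t in zip(preds, y_clean)) / len(y_clean)
    return acc, acc < baseline_acc - tolerance

# Hypothetical classifier and holdout set.
healthy = lambda x: x > 0.5
X_holdout = [0.1, 0.9, 0.2, 0.8]
y_holdout = [False, True, False, True]
acc, alarm = holdout_accuracy_check(healthy, X_holdout, y_holdout, baseline_acc=0.95)
```

A poisoned backdoor may still pass this check on trigger-free data, which is why the audits and explainability tools above remain necessary alongside it.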
What’s the fastest way to reduce evasion risk in medical imaging AI?
The fastest way to lower evasion risks in medical imaging AI is by using adversarial training and randomized smoothing. These approaches enhance the model's ability to withstand adversarial attacks, ensuring its performance remains consistent and dependable.
When does a model inversion attack become a HIPAA reportable breach?
A model inversion attack qualifies as a HIPAA reportable breach if it leads to the exposure of protected health information (PHI) or compromises patient data in ways that violate HIPAA regulations. This includes scenarios such as unauthorized access, re-identifying individuals from anonymized data, or breaches that surpass HIPAA's privacy and security thresholds.
