ABA Model Rule 1.6 requires attorneys to make reasonable efforts to prevent unauthorized disclosure of client information when using technology — including AI tools. This obligation extends to understanding how a vendor processes, stores, and potentially trains on your client data. Most law firms using general-purpose AI tools like ChatGPT are not meeting this standard.
ABA Rule 1.6: What It Actually Requires for Technology
Let's start with the text. Rule 1.6(a) establishes the baseline: "A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent." Rule 1.6(c) adds the technology mandate: lawyers must "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client."
The critical expansion came in ABA Formal Opinion 477R (2017), which addressed electronic communications specifically. The opinion requires attorneys to:
- Understand the technology they're using — not just what it does, but how it handles data
- Assess the sensitivity of the information being transmitted
- Use reasonable measures to protect client data, which may include encryption, access controls, and vendor due diligence
This isn't aspirational guidance. At least 23 state bars have adopted Rule 1.6(c) or equivalent language. Several — including California (Rule 1.6(a)), New York (Rule 1.6(c)), and Texas (Rule 1.05(b)(1)) — have issued their own opinions reinforcing the duty of technology competence. The Florida Bar's 2024 advisory opinion explicitly addressed AI tools, stating that attorneys must understand whether AI vendors use client data for model training.
Takeaway: "I didn't know how the technology worked" is not a defense. The rules require you to know.
The ChatGPT Problem: Why General AI Tools Are a Malpractice Risk
I want to be precise here because this matters for your practice.
When you type a prompt into ChatGPT (free or Plus tier), that input is processed on OpenAI's shared infrastructure. As of March 2026, OpenAI's terms state that free and Plus tier inputs may be used to improve their models unless you opt out through settings. The Enterprise and Team tiers offer stronger protections, but even these process data on shared compute infrastructure.
Here's what this means under Rule 1.6:
Data processing on shared infrastructure means your client's information is processed alongside data from millions of other users. There's no client-matter isolation. A 2025 audit by Bishop Fox (a security firm specializing in AI systems) found that 4 of 7 major language model providers had architectural patterns that could theoretically allow cross-tenant data leakage in edge cases. Theoretical? Yes. But "reasonable efforts" under Rule 1.6 means you can't ignore theoretical risks with real client data.
Model training on inputs is the most direct concern. If your client's case details become part of a model's training data, that information could surface — in fragments, reworded, but substantively recognizable — in another user's output. OpenAI has stated this is "extremely unlikely" for specific data points. But Rule 1.6 doesn't require certainty of harm. It requires reasonable efforts to prevent disclosure.
No audit trail means you can't demonstrate compliance. If a bar complaint arises, you need to show what data you shared with an AI tool and what protections were in place. General-purpose tools provide no client-matter-level logging.
A 2025 American Bar Foundation survey found that 34% of attorneys who use general AI tools have entered client-identifying information. Among those, only 12% had reviewed the vendor's data processing terms before doing so. This is a compliance gap of staggering proportions.
For a detailed comparison of what general AI tools lack versus purpose-built legal AI, see our CaseDelta vs. ChatGPT analysis.
Takeaway: Using ChatGPT's free or Plus tier with any client data is ethically indefensible under current ABA guidance. Enterprise tiers are better but still lack legal-specific protections like client-matter isolation and bar-compliant audit trails.
Questions to Ask Any Legal AI Vendor
I've sat through dozens of legal tech demos. Here are the questions that separate serious vendors from the ones hoping you won't ask:
Data Processing
- Where is my data processed? Acceptable answers: specific cloud regions, named data centers, or on-premises. Unacceptable: "in the cloud" without specifics.
- Is my data processed on shared or isolated infrastructure? Shared infrastructure means other customers' data is processed on the same hardware. Isolated means your firm's data never touches the same compute resources as another firm's.
- Is my data ever used for model training, fine-tuning, or improvement? The only acceptable answer is an unequivocal "no" backed by contractual commitment. "Not by default" or "you can opt out" are red flags — they mean the default is training on your data.
Access and Controls
- Do you offer client-matter-level access controls? This means an associate working on the Smith case can't see data from the Jones case. Firm-level access isn't sufficient for conflict-checking or ethical wall compliance. A minimal sketch of what such a check looks like appears after this list.
- Who at your company can access my data? Ask for a list of roles with access, not vague assurances. Compliant vendors limit access to essential operations personnel and log every access event.
- What happens to my data if I cancel? Look for contractual data deletion timelines — typically 30-90 days — with certification of deletion.
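For attorneys who want to see what "client-matter-level" means in concrete terms, here is a minimal sketch, assuming a simple in-memory map of which matters each user is staffed on. The user and matter identifiers are hypothetical, and a real system would sit on top of the firm's document management or ethical-wall tooling rather than a hard-coded dictionary.

```python
# Minimal sketch of a client-matter-level access check. The mapping is a
# stand-in for the firm's document management or ethical-wall system;
# user and matter identifiers are hypothetical.
matter_access = {
    "associate.lee": {"2026-0147-SMITH"},
    "partner.ortiz": {"2026-0147-SMITH", "2026-0212-JONES"},
}

def can_access(user_id: str, matter_id: str) -> bool:
    """Allow an AI query against a matter only if the user is staffed on it."""
    return matter_id in matter_access.get(user_id, set())

# The associate staffed on the Smith matter cannot pull Jones material
# through the AI tool; firm-level access would make no such distinction.
assert can_access("associate.lee", "2026-0147-SMITH")
assert not can_access("associate.lee", "2026-0212-JONES")
```

Firm-level access collapses this check to a single yes for everyone at the firm, which is exactly what ethical walls are meant to prevent.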
Compliance
- Do you have SOC 2 Type II certification? Type II (not Type I) means an independent auditor has verified that security controls operate effectively over time, not just that they exist on paper. As of early 2026, only 29% of legal AI vendors have achieved SOC 2 Type II, according to Gartner's Legal Technology Survey.
- Can you provide a Data Processing Agreement? A DPA is a contractual guarantee of how data is handled. GDPR requires them for EU data; best practice requires them for all client data. If a vendor doesn't have a DPA ready, their security program isn't mature.
- What's your breach notification policy? Look for specific timelines (72 hours is the standard) and a defined escalation process.
Takeaway: Print these questions. Bring them to every demo. A vendor that fumbles the three data processing questions isn't ready for law firm data.
What "Data Never Leaves" Actually Means
This phrase gets used loosely in legal tech marketing. Let's define it precisely because the architecture matters more than the policy.
Policy-level "data never leaves" means the vendor promises not to share your data. This is enforced by contract, not technology. If the vendor is breached, or if an employee goes rogue, the policy provides no technical protection. It's a promise. Promises get broken.
Architecture-level "data never leaves" means the system is built so that your data physically cannot leave your environment. This includes:
- Tenant isolation: Your firm's data is processed in an isolated compute environment. There's no shared database, no shared model weights that could leak information.
- Zero-knowledge encryption: Data is encrypted with keys that only your firm controls. Even the vendor's engineers can't read your data without your key. A minimal sketch of this key-control principle appears after this list.
- On-premises or virtual private cloud deployment: The AI runs within your firm's infrastructure boundary, not the vendor's.
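To make "zero-knowledge" concrete, here is a minimal sketch of firm-controlled-key encryption using the open-source Python cryptography library. It illustrates only the key-control principle (the vendor never holds the key, so it can never read the data); it is not a description of CaseDelta's or any other vendor's actual pipeline, and the function names are illustrative.

```python
# Minimal sketch: client data is encrypted with a key generated and held
# inside the firm's own infrastructure. Without firm_key, neither the
# vendor's engineers nor a breached vendor database can read the plaintext.
from cryptography.fernet import Fernet

firm_key = Fernet.generate_key()  # kept in a firm-managed key store, never shared
cipher = Fernet(firm_key)

def encrypt_before_leaving_firm(document_text: str) -> bytes:
    """Encrypt a client document before it crosses the firm boundary."""
    return cipher.encrypt(document_text.encode("utf-8"))

def decrypt_inside_firm(ciphertext: bytes) -> str:
    """Decrypt results only within the firm's isolated environment."""
    return cipher.decrypt(ciphertext).decode("utf-8")

token = encrypt_before_leaving_firm("Smith matter: draft settlement terms")
assert decrypt_inside_firm(token) == "Smith matter: draft settlement terms"
```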
When CaseDelta says your data stays inside your firm, we mean architecture, not policy. Your firm's data is processed in an isolated environment, encrypted with firm-controlled keys, and never touches shared infrastructure. We built it this way because we talked to enough managing partners to know that a policy document doesn't survive a bar complaint — architecture does.
The 2025 Verizon Data Breach Investigations Report found that 68% of data breaches in professional services involved third-party vendors. Architecture-level isolation addresses the largest category of breach risk: even if the vendor is compromised, your client data isn't sitting in the vendor's environment to be taken.
Takeaway: When a vendor says "your data never leaves," ask: is that a policy or an architecture? The difference is the difference between a promise and a guarantee.
Audit Trails: Why They Matter for Bar Compliance
Audit trails in legal AI are comprehensive logs showing every interaction between attorneys and the AI system — what data was accessed, what was generated, who accessed it, and when. They are not optional for ethically compliant legal AI.
Here's why they matter:
Bar complaints. If a client alleges that their confidential information was mishandled, you need to demonstrate exactly what data entered the AI system and how it was processed. Without an audit trail, you're relying on attorney memory and hope.
Conflict checking. When a potential new client conflict arises, you need to know whether any attorney at your firm has already processed data related to the adverse party through your AI system. Without matter-level audit trails, this check is impossible.
Malpractice defense. If AI-generated work product contains an error, the audit trail shows the inputs and outputs — enabling you to demonstrate that the attorney exercised appropriate supervision. ABA Formal Opinion 512 (2024) specifically notes that documented AI supervision practices are relevant to competence determinations.
At minimum, legal AI audit trails should capture the following (a minimal example of one such entry appears after the list):
- User identity and matter association for every interaction
- Complete input/output pairs (not summaries)
- Timestamp and session identifiers
- Any data accessed from firm systems
- Export and download events
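As a rough illustration of those fields, here is a minimal sketch of what a single audit entry might look like as a data structure. The field names are illustrative, not a prescribed schema or any vendor's actual format.

```python
# Minimal sketch of one matter-level audit entry. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    user_id: str          # attorney or staff identity
    matter_id: str        # client-matter association for the interaction
    session_id: str       # groups related interactions
    timestamp: datetime   # when the interaction occurred (UTC)
    prompt: str           # complete input, not a summary
    response: str         # complete output, not a summary
    sources_accessed: list[str] = field(default_factory=list)  # firm documents touched
    exported: bool = False  # whether the output was downloaded or exported

entry = AuditEntry(
    user_id="associate.lee",
    matter_id="2026-0147-SMITH",
    session_id="a1b2c3",
    timestamp=datetime.now(timezone.utc),
    prompt="Summarize the key dates in the attached deposition transcript.",
    response="(full generated summary would be stored here)",
    sources_accessed=["depositions/smith_2026-01-12.pdf"],
)
```

An append-only store of entries like this is what lets a firm answer, months later, exactly what entered the system on a given matter and who put it there.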
A 2024 survey by ILTA found that firms with comprehensive AI audit trails resolved bar inquiries 3x faster than firms without them. The audit trail isn't bureaucratic overhead — it's your first line of defense.
Takeaway: If your legal AI tool doesn't generate matter-level audit trails, it's not built for regulated practice. Full stop.
How to Evaluate Security Claims from Legal Tech Vendors
After talking to dozens of firms about their vendor evaluation process, I've noticed a pattern: firms that get burned share one trait — they evaluated features first and security second.
Here's the order I recommend:
Step 1: Request the SOC 2 Report Before the Demo
Any vendor that can't produce a SOC 2 Type II report on request should be eliminated from consideration. The report exists or it doesn't. "We're working on it" means the security controls haven't been independently verified.
Step 2: Read the Data Processing Agreement
Don't skim it. Read it. Look for:
- Explicit prohibition on using client data for model training
- Defined data deletion procedures and timelines
- Breach notification timelines (72 hours is standard)
- Sub-processor disclosure (who else touches your data?)
Step 3: Ask for an Architecture Diagram
A serious vendor can explain, in plain terms, where your data sits at every stage of processing. If they can't draw you a diagram showing the data flow from your firm's input to the AI's output, they don't understand their own system well enough to secure it.
Step 4: Check Their Insurance
Does the vendor carry cyber liability insurance? What's the coverage limit? If a breach occurs and causes your firm harm, the vendor's insurance is your backstop. Vendors without adequate coverage are asking you to absorb the risk.
Step 5: Review Their Incident History
Ask directly: have you had a security incident? A breach? A vendor that says "never" after more than a year on the market is either lying or hasn't been tested. A vendor that can describe an incident and the improvements they made in response is more trustworthy than one claiming perfection.
Takeaway: Security evaluation is not a checklist to rush through — it's the most important part of your vendor selection. The features don't matter if the security fails.
Security is foundational to everything we build at CaseDelta. For a complete look at our security architecture, visit our security page. For the broader context of evaluating legal AI tools, read The Complete Guide to AI for Law Firms in 2026.