MTC: Lawyers, Generative AI, and the Right to Privacy: Navigating Ethics, Client Confidentiality, and Public Data in the Digital Age

Modern attorneys need to tackle AI ethics and privacy risks.

The legal profession stands at a critical crossroads as generative AI tools like ChatGPT become increasingly integrated into daily practice. While these technologies offer unprecedented efficiency and insight, they also raise urgent questions about client privacy, data security, and professional ethics—questions that every lawyer, regardless of technical proficiency, must confront.

Recent developments have brought these issues into sharp focus. OpenAI, the company behind ChatGPT, was recently compelled to preserve all user chats for legal review, highlighting how data entered into generative AI systems can be stored, accessed, and potentially scrutinized by third parties. For lawyers, this is not a theoretical risk; it is a direct challenge to the core obligations of client confidentiality and the right to privacy.

The ABA Model Rules and Generative AI

The American Bar Association’s Model Rules of Professional Conduct are clear: Rule 1.6 requires lawyers to “act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure.” This duty extends beyond existing clients to former and prospective clients under Rules 1.9 and 1.18. Crucially, the obligation applies even to information that is publicly accessible or contained in public records, unless disclosure is authorized or consented to by the client.

Attorneys need to explain generative AI privacy concerns to clients.

The ABA’s recent Formal Opinion 512 underscores these concerns in the context of generative AI. Lawyers must fully consider their ethical obligations, including competence, confidentiality, informed consent, and reasonable fees when using AI tools. Notably, the opinion warns that boilerplate consent in engagement letters is not sufficient; clients must be properly informed about how their data may be used and stored by AI systems.

Risks of Generative AI: PII, Case Details, and Public Data

Generative AI tools, especially those that are self-learning, can retain and reuse input data, including Personally Identifiable Information (PII) and case-specific details. This creates a risk that confidential information could be inadvertently disclosed or cross-used in other cases, even within a closed firm system. In March 2023, a ChatGPT data leak allowed users to view chat histories of others, illustrating the real-world dangers of data exposure.

Moreover, lawyers may be tempted to use a client’s public data—such as court filings or news reports—in AI-powered research or drafting. However, ABA guidance and multiple ethics opinions make it clear: confidentiality obligations apply even to information that is “generally known” or publicly accessible, unless the client has given informed consent or an exception applies. The act of further publicizing such data, especially through AI tools that may store and process it, can itself breach confidentiality.

Practical Guidance for the Tech-Savvy (and Not-So-Savvy) Lawyer

Lawyers can face disciplinary hearings over unethical use of generative AI.

The Tech-Savvy Lawyer.Page Podcast Episode 99, “Navigating the Intersection of Law, Ethics, and Technology with Jayne Reardon,” and other The Tech-Savvy Lawyer.Page postings offer practical insights for lawyers with limited to moderate tech skills. The message is clear: lawyers must be strategic, not just enthusiastic, about legal tech adoption. This means:

  • Vetting AI Tools: Choose AI platforms with robust privacy protections, clear data handling policies, and transparent security measures.

  • Obtaining Informed Consent: Clearly explain to clients how their information may be used, stored, or processed by AI systems—especially if public data or PII is involved.

  • Limiting Data Input: Avoid entering sensitive client details, PII, or case specifics into generative AI tools unless absolutely necessary and with explicit client consent.

  • Monitoring for Updates: Stay informed about evolving ABA guidance, state bar opinions, and the technical capabilities of AI tools.

  • Training and Policies: Invest in ongoing education and firm-wide policies to ensure all staff understand the risks and responsibilities associated with AI use.
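For readers comfortable with light scripting, the “Limiting Data Input” point above can be made concrete. Below is a minimal, hypothetical sketch of pattern-based redaction run on text before it is pasted into a generative AI tool. The patterns and function names are illustrative assumptions, not a vetted product; regexes alone will miss names and context-dependent identifiers, so a real firm workflow should pair a dedicated redaction tool with human review.

```python
import re

# Hypothetical patterns for a few common categories of US PII.
# Note: this deliberately does NOT catch client names or case facts —
# pattern matching is a first pass, not a substitute for human review.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before the text
    leaves the firm's systems (e.g., before use in an AI prompt)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Client ref. SSN 123-45-6789, reachable at jdoe@example.com."
print(redact(prompt))
# → Client ref. SSN [SSN REDACTED], reachable at [EMAIL REDACTED].
```

Even with a script like this in place, the safer default remains the one stated above: do not enter sensitive client details into generative AI tools at all unless necessary and consented to.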

Conclusion

The promise of generative AI in law is real, but so are the risks. As OpenAI’s recent legal challenges and the ABA’s evolving guidance make clear, lawyers must prioritize privacy, confidentiality, and ethics at every step. By embracing technology with caution, transparency, and respect for client rights, legal professionals can harness AI’s benefits without compromising the foundational trust at the heart of the attorney-client relationship.


BOLO: LexisNexis Data Breach: What Legal Professionals Need to Know Now—and Why All Lexis Products Deserve Scrutiny!

Lawyers need to be both tech-savvy and cyber-savvy!

On December 25, 2024, LexisNexis Risk Solutions (LNRS)—a major data broker and subsidiary of LexisNexis—suffered a significant data breach that exposed the personal information of over 364,000 individuals. This incident, which went undetected until April 2025, highlights urgent concerns for legal professionals who rely on LexisNexis and its related products for research, analytics, and client management.

What Happened in the LexisNexis Breach?

Attackers accessed sensitive data through a third-party software development platform (GitHub), not LexisNexis’s internal systems. The compromised information includes names, contact details, Social Security numbers, driver’s license numbers, and dates of birth. Although LexisNexis asserts that no financial or credit card data was involved and that its main systems remain secure, the breach raises red flags about the security of data handled across all Lexis-branded platforms.

Why Should You Worry About Other Lexis Products?

LexisNexis Risk Solutions is just one division under the LexisNexis and RELX umbrella, which offers a suite of legal, analytics, and data products widely used by law firms, courts, and corporate legal departments. The breach demonstrates that vulnerabilities may not be limited to one product or platform; third-party integrations, development tools, and shared infrastructure can all present risks. If you use LexisNexis for legal research, client intake, or case management, your clients’ confidential data could be at risk—even if the breach did not directly affect your specific product.

Ethical Implications: ABA Model Rules of Professional Conduct

All lawyers need to be prepared to fight data leaks!

The American Bar Association’s Model Rules of Professional Conduct require lawyers to safeguard client information and maintain competence in technology. Rule 1.6(c) mandates that attorneys “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Rule 1.1 further obligates lawyers to keep abreast of changes in law and its practice, including the benefits and risks associated with relevant technology.

In light of the LexisNexis breach, lawyers must:

  • Assess the security of all third-party vendors, including legal research and data analytics providers.

  • Promptly notify clients if their data may have been compromised, as required by ethical and sometimes statutory obligations.

  • Implement additional safeguards, such as multi-factor authentication and regular vendor risk assessments.

  • Stay informed about ongoing investigations and legal actions stemming from the breach.

What Should Legal Professionals Do Next?

  • Review your firm’s use of LexisNexis and related products.

  • Ask vendors for updated security protocols and breach response plans.

  • Consider offering affected clients identity protection services.

  • Update internal policies to reflect heightened risks associated with third-party platforms.

The Bottom Line

The LexisNexis breach is a wake-up call for the legal profession. Even if your primary Lexis product was not directly affected, the interconnected nature of modern legal technology means your clients’ data could still be at risk. Proactive risk management and ethical vigilance are now more critical than ever.