MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". Comment [8] to the rule specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.[25][22][20][21]

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients' PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.[31]

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
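
To make that concrete, here is a minimal Python sketch of a pre-submission redaction pass. It is illustrative only: the patterns and the client name are hypothetical assumptions, not a complete PII detector, and no simple script substitutes for human review.

```python
import re

# Illustrative only: a crude redaction pass before text ever reaches an AI tool.
# These patterns and the client name are hypothetical assumptions, not a
# complete PII detector; human review is still required.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\bAcme Corp\b"), "[CLIENT]"),                 # known client names, listed per matter
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before drafting a prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Draft a demand letter for Acme Corp; contact jane@acmecorp.com, SSN 123-45-6789."))
# -> "Draft a demand letter for [CLIENT]; contact [EMAIL], SSN [SSN]."
```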

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final Thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

🚨 AWS Outage Resolved: Critical Ethics Guidance for Lawyers Using Cloud-Based Legal Services

Legal professionals: don't just react - act when your online legal systems are down!

Amazon Web Services experienced a major outage on October 20, 2025, disrupting legal practice management platforms like Clio, MyCase, PracticePanther, LEAP, and Lawcus. The Domain Name System (DNS) resolution failure in AWS's US-EAST-1 region was fully mitigated by 6:35 AM EDT after approximately three hours. That does not mean, however, that every backlogged issue caused by the outage had been resolved at the time of this posting. Note: DNS is the internet's phone book, translating human-readable web addresses into the numerical IP addresses that computers actually use. When DNS fails, it's like having all the street signs disappear at once. Your destination still exists, but there's no way to find it.
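
For the technically curious, the difference between a DNS failure and a true service outage is easy to demonstrate. The minimal Python sketch below (the hostname is hypothetical, not a real vendor endpoint) checks whether a name still resolves:

```python
import socket

# A hypothetical health check: can this machine still resolve a practice-
# management host? The hostname below is illustrative, not a real vendor URL.
HOST = "app.example-legal-saas.com"

try:
    ip = socket.gethostbyname(HOST)  # ask DNS to translate the name to an IP
    print(f"{HOST} resolves to {ip} - DNS is working")
except socket.gaierror:
    # During an outage like this one, the server may be up but unfindable:
    # the "street signs" are gone even though the building still stands.
    print(f"Could not resolve {HOST} - a DNS problem, not proof the service itself is down")
```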

‼️ TIP! ‼️ Try clearing your browser's cache - that may help resolve some of the issues.

Legal professionals, what are your protocols when your online legal services are down?!

Lawyers using cloud-dependent legal services must review their ethical obligations under ABA Model Rules 1.1 and comment [8] (technological competence), 1.6 (confidentiality), and 5.3 (supervision of third-party vendors). Key steps include: documenting the incident's impact on client matters (if any), assessing whether material client information was compromised, notifying affected current clients if data breach occurred, reviewing business continuity plans, and conducting due diligence on cloud providers' disaster recovery protocols. Law firms should verify their vendors maintain redundant backup systems, SSAE16 audited data centers, and clear data ownership policies. The outage highlights the critical need for lawyers to understand their cloud infrastructure dependencies and maintain contingency plans for service disruptions.

MTC: Lawyers, Generative AI, and the Right to Privacy: Navigating Ethics, Client Confidentiality, and Public Data in the Digital Age

Modern attorneys need to tackle AI ethics and privacy risks.

The legal profession stands at a critical crossroads as generative AI tools like ChatGPT become increasingly integrated into daily practice. While these technologies offer unprecedented efficiency and insight, they also raise urgent questions about client privacy, data security, and professional ethics—questions that every lawyer, regardless of technical proficiency, must confront.

Recent developments have brought these issues into sharp focus. OpenAI, the company behind ChatGPT, was recently compelled to preserve all user chats for legal review, highlighting how data entered into generative AI systems can be stored, accessed, and potentially scrutinized by third parties. For lawyers, this is not a theoretical risk; it is a direct challenge to the core obligations of client confidentiality and the right to privacy.

The ABA Model Rules and Generative AI

The American Bar Association’s Model Rules of Professional Conduct are clear: Rule 1.6 requires lawyers to “act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure”. This duty extends beyond existing clients to former and prospective clients under Rules 1.9 and 1.18. Crucially, the obligation applies even to information that is publicly accessible or contained in public records, unless disclosure is authorized or consented to by the client.

Attorneys need to explain generative AI privacy concerns to clients.

The ABA’s recent Formal Opinion 512 underscores these concerns in the context of generative AI. Lawyers must fully consider their ethical obligations, including competence, confidentiality, informed consent, and reasonable fees when using AI tools. Notably, the opinion warns that boilerplate consent in engagement letters is not sufficient; clients must be properly informed about how their data may be used and stored by AI systems.

Risks of Generative AI: PII, Case Details, and Public Data

Generative AI tools, especially those that are self-learning, can retain and reuse input data, including Personally Identifiable Information (PII) and case-specific details. This creates a risk that confidential information could be inadvertently disclosed or cross-used in other cases, even within a closed firm system. In March 2023, a ChatGPT data leak allowed users to view chat histories of others, illustrating the real-world dangers of data exposure.

Moreover, lawyers may be tempted to use client public data—such as court filings or news reports—in AI-powered research or drafting. However, ABA guidance and multiple ethics opinions make it clear: confidentiality obligations apply even to information that is “generally known” or publicly accessible, unless the client has given informed consent or an exception applies. The act of further publicizing such data, especially through AI tools that may store and process it, can itself breach confidentiality.

Practical Guidance for the Tech-Savvy (and Not-So-Savvy) Lawyer

Lawyers can face disciplinary hearings over unethical use of generative AI.

Episode 99 of The Tech-Savvy Lawyer.Page Podcast, "Navigating the Intersection of Law, Ethics, and Technology" with Jayne Reardon, and other The Tech-Savvy Lawyer.Page postings offer practical insights for lawyers with limited to moderate tech skills. The message is clear: lawyers must be strategic, not just enthusiastic, about legal tech adoption. This means:

  • Vetting AI Tools: Choose AI platforms with robust privacy protections, clear data handling policies, and transparent security measures.

  • Obtaining Informed Consent: Clearly explain to clients how their information may be used, stored, or processed by AI systems—especially if public data or PII is involved.

  • Limiting Data Input: Avoid entering sensitive client details, PII, or case specifics into generative AI tools unless absolutely necessary and with explicit client consent.

  • Monitoring for Updates: Stay informed about evolving ABA guidance, state bar opinions, and the technical capabilities of AI tools.

  • Training and Policies: Invest in ongoing education and firm-wide policies to ensure all staff understand the risks and responsibilities associated with AI use.

Conclusion

The promise of generative AI in law is real, but so are the risks. As OpenAI’s recent legal challenges and the ABA’s evolving guidance make clear, lawyers must prioritize privacy, confidentiality, and ethics at every step. By embracing technology with caution, transparency, and respect for client rights, legal professionals can harness AI’s benefits without compromising the foundational trust at the heart of the attorney-client relationship.

MTC

MTC: Florida Bar's Proposed Listserv Rule: A Digital Wake-Up Call for Legal Professionals

Not just Florida lawyers should be reacting to new listserv ethics rules!

The Florida Bar's proposed Advisory Opinion 25-1 regarding lawyers' use of listservs represents a crucial moment for legal professionals navigating the digital landscape. This proposed guidance should serve as a comprehensive reminder about the critical importance of maintaining client confidentiality in our increasingly connected professional world.

The Heart of the Matter: Confidentiality in Digital Spaces 💻

The Florida Bar's Professional Ethics Committee has recognized that online legal discussion groups and peer-to-peer listservs provide invaluable resources for practitioners. These platforms facilitate contact with experienced professionals and offer quick feedback on legal developments. However, the proposed opinion emphasizes that lawyers participating in listservs must comply with Rule 4-1.6 of the Rules Regulating The Florida Bar.

The proposed guidance builds upon the American Bar Association's Formal Opinion 511, issued in 2024, which prohibits lawyers from posting questions or comments relating to client representations without informed consent if there's a reasonable likelihood that client identity could be inferred. This nationwide trend reflects growing awareness of digital confidentiality challenges facing modern legal practitioners.

National Landscape of Ethics Opinions 📋

🚨 BOLO: Florida is not the only state that has rules related to lawyers discussing cases online!

The Florida Bar's approach aligns with a broader national movement addressing lawyer ethics in digital communications. Multiple jurisdictions have issued similar guidance over the past two decades. Maryland's Ethics Opinion 2015-03 established that hypotheticals are permissible only when there's no likelihood of client identification. Illinois Ethics Opinion 12-15 permits listserv guidance without client consent only when inquiries won't reveal client identity.

Technology Competence and Professional Responsibility 🎯

I regularly address these evolving challenges for legal professionals. As noted in many of The Tech-Savvy Lawyer.Page Podcast's discussions, lawyers must now understand both the benefits and risks of relevant technology under ABA Model Rule 1.1, Comment 8. Twenty-seven states have adopted revised versions of this comment, making technological competence an ethical obligation.

The proposed Florida rule reflects this broader trend toward requiring lawyers to understand their digital tools. Comment 8 to Rule 1.1 advises lawyers to "keep abreast of changes in the law and its practice," including technological developments. This requirement extends beyond simple familiarity to encompass understanding how technology impacts client confidentiality.

Practical Implications for Legal Practice 🔧

The proposed advisory opinion provides practical guidance for lawyers who regularly participate in professional listservs. Prior informed consent is recommended when there is a reasonable possibility that clients could be identified through posted content or the posting lawyer's identity. Without such consent, posts should remain general and abstract to avoid exposing unnecessary information.

The guidance particularly affects in-house counsel and government lawyers who represent single clients, as their client identities would be obvious in any posted questions. These practitioners face heightened scrutiny when participating in online professional discussions.

Final Thoughts: Best Practices for Digital Ethics

Florida lawyers need to know their state rules before discussing cases online!

Legal professionals should view the Florida Bar's proposed guidance as an opportunity to enhance their digital practice management. The rule encourages lawyers to obtain informed consent at representation's outset when they anticipate using listservs for client benefit. This proactive approach can be memorialized in engagement agreements.

The proposed opinion also reinforces the fundamental principle that uncertainty should be resolved in favor of nondisclosure. This conservative approach protects both client interests and lawyer professional standing in our digitally connected legal ecosystem.

The Florida Bar's proposed Advisory Opinion 25-1 represents more than regulatory housekeeping. It provides essential guidance for legal professionals navigating increasingly complex digital communication landscapes while maintaining the highest ethical standards our profession demands.

MTC

BOLO: LexisNexis Data Breach: What Legal Professionals Need to Know Now—and Why All Lexis Products Deserve Scrutiny!

Lawyers need to be both tech-savvy and cyber-savvy!

On December 25, 2024, LexisNexis Risk Solutions (LNRS)—a major data broker and subsidiary of LexisNexis—suffered a significant data breach that exposed the personal information of over 364,000 individuals. This incident, which went undetected until April 2025, highlights urgent concerns for legal professionals who rely on LexisNexis and its related products for research, analytics, and client management.

What Happened in the LexisNexis Breach?

Attackers accessed sensitive data through a third-party software development platform (GitHub), not LexisNexis’s internal systems. The compromised information includes names, contact details, Social Security numbers, driver’s license numbers, and dates of birth. Although LexisNexis asserts that no financial or credit card data was involved and that its main systems remain secure, the breach raises red flags about the security of data handled across all Lexis-branded platforms.

Why Should You Worry About Other Lexis Products?

LexisNexis Risk Solutions is just one division under the LexisNexis and RELX umbrella, which offers a suite of legal, analytics, and data products widely used by law firms, courts, and corporate legal departments. The breach demonstrates that vulnerabilities may not be limited to one product or platform; third-party integrations, development tools, and shared infrastructure can all present risks. If you use LexisNexis for legal research, client intake, or case management, your clients’ confidential data could be at risk—even if the breach did not directly affect your specific product.

Ethical Implications: ABA Model Rules of Professional Conduct

All lawyers need to be prepared to fight data leaks!

The American Bar Association’s Model Rules of Professional Conduct require lawyers to safeguard client information and maintain competence in technology. Rule 1.6(c) mandates that attorneys “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Rule 1.1 further obligates lawyers to keep abreast of changes in law and its practice, including the benefits and risks associated with relevant technology.

In light of the LexisNexis breach, lawyers must:

  • Assess the security of all third-party vendors, including legal research and data analytics providers.

  • Promptly notify clients if their data may have been compromised, as required by ethical and sometimes statutory obligations.

  • Implement additional safeguards, such as multi-factor authentication and regular vendor risk assessments.

  • Stay informed about ongoing investigations and legal actions stemming from the breach.

What Should Legal Professionals Do Next?

  • Review your firm’s use of LexisNexis and related products.

  • Ask vendors for updated security protocols and breach response plans.

  • Consider offering affected clients identity protection services.

  • Update internal policies to reflect heightened risks associated with third-party platforms.

The Bottom Line

The LexisNexis breach is a wake-up call for the legal profession. Even if your primary Lexis product was not directly affected, the interconnected nature of modern legal technology means your clients’ data could still be at risk. Proactive risk management and ethical vigilance are now more critical than ever.

🚨 BOLO: Android Ad Fraud Malware and Your ABA Ethical Duties – What Every Lawyer Must Know in 2025 🚨

Defend Client Data from Malware!

The discovery of the “Kaleidoscope” ad fraud malware targeting Android devices is a wake-up call for legal professionals. This threat, which bombards users with unskippable ads and exploits app permissions, is not just an annoyance - it is a direct risk to client confidentiality, law firm operations, and compliance with the ABA Model Rules of Professional Conduct. Lawyers must recognize that cybersecurity is not optional; it is an ethical mandate under the ABA Model Rules, including Rules 1.1, 1.3, 1.4, 1.6, 5.1, and 5.3.

Why the ABA Model Rules Matter

  • Rule 1.6 (Confidentiality): Lawyers must make reasonable efforts to prevent unauthorized disclosure of client information. A compromised device can leak confidential data, violating this core duty.

  • Rule 1.1 (Competence): Competence now includes understanding and managing technological risks. Lawyers must stay abreast of threats like Kaleidoscope and take appropriate precautions.

  • Rule 1.3 (Diligence): Prompt action is required to investigate and remediate breaches, protecting client interests.

  • Rule 1.4 (Communication): Lawyers must communicate risks and safeguards to clients, including the potential for data breaches and the steps being taken to secure information.

  • Rules 5.1 & 5.3 (Supervision): Law firm leaders must ensure all personnel, including non-lawyers, adhere to cybersecurity protocols.

Practical Steps for Lawyers – Backed by Ethics and The Tech-Savvy Lawyer.Page

Lawyers: Secure Your Practice Now!

  • Download Only from Trusted Sources: Only install apps from the Google Play Store, leveraging its built-in protections. Avoid third-party stores, the main source of Kaleidoscope infections.

  • Review App Permissions: Be vigilant about apps requesting broad permissions, such as “Display over other apps.” These can enable malware to hijack your device.

  • Secure Devices: Use strong, unique passwords, enable multi-factor authentication, and encrypt devices - simple but essential steps emphasized by our blog posts on VPNs and ABA guidance.

  • Update Regularly: Keep your operating system and apps up to date to patch vulnerabilities.

  • Educate and Audit: Train your team about mobile threats and run regular security audits, as highlighted in Cybersecurity Awareness Month posts on The Tech-Savvy Lawyer.Page.

  • Incident Response: Have a plan for responding to breaches, as required by ABA Formal Opinion 483 and best practices.

  • Communicate with Clients: Discuss with clients how their information is protected and notify them promptly in the event of a breach, as required by Rule 1.4 and ABA opinions.

  • Label Confidential Communications: Mark sensitive communications as “privileged” or “confidential,” per ABA guidance.

Advanced Strategies

Lawyers need to have security measures in place to protect client data!

  • Leverage AI-Powered Security: Use advanced tools for real-time threat detection, as recommended by The Tech-Savvy Lawyer.Page.

  • VPN and Secure Networks: Avoid public Wi-Fi; if you must use it, connect through a VPN (see The Tech-Savvy Lawyer.Page articles on VPNs) to protect data in transit.

  • Regular Backups: Back up data to mitigate ransomware and other attacks.

By following these steps, lawyers fulfill their ethical duties, protect client data, and safeguard their practice against evolving threats like Kaleidoscope.

MTC: Legal Cybersecurity Crisis - How the CVE System's Defunding Compromises Digital Safety for Law Firms 🚨

In the chaos, lawyers need to defend client data as the CVE shield may be in jeopardy!

CVE Program’s Last-Minute Rescue: What Lawyers Must Learn from the Cybersecurity Near-Crisis 🚨

The legal world narrowly avoided a digital disaster last week. The Common Vulnerabilities and Exposures (CVE) program—the backbone of global cybersecurity—came within hours of losing its federal funding, sending shockwaves through the legal and cybersecurity communities. In an eleventh-hour move, the Cybersecurity and Infrastructure Security Agency (CISA) extended funding for MITRE to continue operating the CVE program, averting a shutdown that could have left law firms and their clients exposed to unprecedented cyber risk. The episode is a wake-up call for every legal professional: our reliance on a single, government-funded system for vulnerability intelligence is a vulnerability in itself.

The Alarm: How Close We Came to Losing the CVE Program ⚠️

On April 16, 2025, MITRE, the non-profit that manages the CVE database, announced its contract with the Department of Homeland Security would expire at midnight. The news triggered widespread alarm across the cybersecurity sector, as the CVE program is essential for tracking, cataloging, and sharing information about software vulnerabilities. Legal technology vendors, law firm IT teams, and risk managers all depend on CVE data to prioritize security updates and defend against cyber threats.

The potential consequences were immediate and severe. Experts warned that a lapse in CVE services would delay vulnerability disclosures, disrupt incident response, and create a dangerous window for attackers to exploit unpatched systems. Law firms, which handle highly sensitive client information, would have faced heightened risks of data breaches, malpractice claims, and regulatory penalties.

The Save: CISA Steps In—But Only for Now

CISA’s rescue: Legal cybersecurity lifeline survives—uncertainty remains.

In response to the outcry, CISA executed a last-minute contract extension, ensuring there would be no interruption in CVE services for at least the next 11 months. MITRE confirmed that the funding would keep the program running, and the global cybersecurity community breathed a collective sigh of relief.

Yet, this solution is temporary. The extension lasts less than a year, and the long-term sustainability of the CVE program remains uncertain. The episode has already spurred the formation of a new nonprofit, the CVE Foundation, aimed at ensuring the program’s independence and stability beyond government sponsorship.

Why This Matters for Lawyers and Law Firms ⚖️

The CVE program is more than a technical tool—it is a legal lifeline. The American Bar Association’s Model Rules require lawyers to safeguard client confidentiality, maintain technological competence, and supervise staff and vendors on cybersecurity practices. See MRPC 1.1[8] & 1.6. Without reliable, up-to-date vulnerability intelligence, law firms cannot meet these obligations.

If the CVE program had gone dark, lawyers would have faced:

  • Increased risk of data breaches: Without a unified system for tracking vulnerabilities, attackers would have more time and opportunity to exploit unpatched systems, putting client data at risk.

  • Malpractice exposure: Failing to implement timely security updates could be seen as a breach of the duty of competence and confidentiality, opening the door to claims of negligence or breach of fiduciary duty.

  • Compliance headaches: With regulatory requirements around breach notification and data protection tightening, law firms would struggle to demonstrate they had taken “reasonable efforts” to protect client information.

  • Vendor management chaos: Many legal technology providers rely on CVE identifiers to communicate security patches. Without them, law firms would face confusion and delays in applying critical updates.

Lessons Learned: What Lawyers Should Do Next 🛡️

The CVE funding scare revealed that even the most established cybersecurity programs can be vulnerable. For the legal profession, this is a clear signal to take proactive steps:

Lawyers have a duty to protect their clients’ PII from cyberattacks!

  • Diversify threat intelligence sources: Don't rely solely on the CVE program. Lawyers and IT teams should monitor additional resources such as the National Vulnerability Database (NVD), CISA Alerts & Advisories, and vendor-specific feeds (a minimal sketch of polling the NVD feed programmatically follows this list).

  • Review and update incident response plans: Ensure your breach response protocols account for the possibility of disruptions in vulnerability intelligence. Document your reliance on CVE and alternative sources for compliance purposes.

  • Strengthen vendor contracts: Require legal technology providers to maintain robust vulnerability management practices, even if the CVE system is disrupted.

  • Stay engaged and advocate: Support efforts to make the CVE program sustainable and independent. The legal community should join calls for diverse funding and governance to avoid future crises.

  • Educate staff and clients: Communicate the importance of cybersecurity vigilance and the evolving landscape. Make sure everyone understands their role in protecting client data.
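
For firms with technically inclined staff, the Python sketch below shows one minimal, illustrative way to poll the public NVD API (version 2.0) for recent entries mentioning a product you depend on. The search keyword is an assumption; consult NVD's documentation for rate limits and API keys before any real use.

```python
import requests

# Minimal, illustrative poll of the public NVD API (v2.0) for CVEs matching a
# keyword. The keyword is an assumption; check NVD's documentation for rate
# limits and API keys before using this in practice.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

resp = requests.get(
    NVD_URL,
    params={"keywordSearch": "practice management", "resultsPerPage": 20},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    summary = cve["descriptions"][0]["value"]
    print(cve["id"], "-", summary[:100])
```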

Final Thoughts: A Fragile Peace and a Call for Vigilance 🔍

The CVE program’s last-minute rescue is a relief, but not a resolution. The legal sector must recognize that the stability of our cybersecurity infrastructure is not guaranteed. With only 11 months of assured funding, the risk of another crisis looms. The new CVE Foundation may provide a path forward, but it will require broad support from both public and private sectors.

Lawyers must remain vigilant, proactive, and informed. The next funding scare could come with less warning—and with even higher stakes for client confidentiality, professional responsibility, and the very trust that underpins the legal profession.

MTC

Shout Out: Unlock AI's Potential with Ethics Expertise 🚀💻

Keep up with the constantly changing world of AI and legal ethics!

In the rapidly evolving legal landscape, embracing artificial intelligence (AI) is no longer optional—it's essential 🌟. The upcoming AI and Legal Ethics: A Risk-Benefit Analysis 2025 webinar offers a unique opportunity for lawyers to enhance their practice while navigating the complex ethical considerations surrounding AI use 📚. Join Hilary P. Gerzhoy and Julienne Pasichow of HWG LLP as they delve into critical areas such as competence in technology, supervision of AI tools, reasonable fees, confidentiality, truth in advertising, and client communication standards 📊.

By attending this webinar, you'll gain practical insights into integrating AI responsibly, ensuring compliance with professional conduct standards, and staying ahead in the legal tech revolution 🚀. Whether you're looking to improve efficiency, enhance client services, or simply stay updated on the latest legal tech trends, this event is a must-attend for any forward-thinking lawyer 🚀.

You can attend this information-packed CLE virtually!

Don't miss out! Register now and elevate your legal practice with AI expertise 💻👉 https://dcbar.inreachce.com/Details/Information/11092a42-cde2-426d-9dbf-25f270b2df09

I hope to “virtually” see you there!

Shout Out to Robert Ambrogi: AI Legal Research Platforms - A Double-Edged Sword for Tech-Savvy Lawyers 🔍⚖️

The use of AI is a great starting point - but always check your work (especially your citations)!

Robert Ambrogi's recent article on LawNext sheds light on a crucial development in legal tech: the comparison of AI-driven legal research platforms. This "AI smackdown" reveals both the potential and pitfalls of these tools, echoing concerns raised in our previous editorial about Lexis AI's shortcomings.

The Southern California Association of Law Libraries' panel, featuring expert librarians, put Lexis+AI, Westlaw Precision AI, and vLex's Vincent AI to the test. Their findings? While these platforms show promise in answering basic legal questions, they're not without flaws.

Each platform demonstrated unique strengths: Lexis+AI's integration with Shepard's, Westlaw Precision AI's KeyCite features, and Vincent AI's user control options. However, inconsistencies in responses to complex queries and recent legislation underscore a critical point: AI tools are supplements, not replacements, for thorough legal research.

This evaluation aligns with our earlier critique of Lexis AI, reinforcing the need for cautious adoption of AI in legal practice. As the technology evolves, so must our approach to using it.

Mark Gediman's wise words from Bob’s article serve as a fitting conclusion:

Whenever I give the results to an attorney, I always include a disclaimer that this should be the beginning of your research, and you should review the results for relevance and applicability prior to using it, but you should not rely on it as is.
— Mark Gediman

For tech-savvy lawyers, the message is clear: Embrace AI's potential, but never forget the irreplaceable value of human expertise and critical thinking in legal research. 🧠💼

MTC

MTC: AI in Legal Email - Balancing Innovation and Ethics 💼🤖

Lawyers have an ethical duty when using AI in their work!

The integration of AI into lawyers' email systems presents both exciting opportunities and significant challenges. As legal professionals navigate this technological frontier, we must carefully weigh the benefits against potential ethical pitfalls.

Advantages of AI in Legal Email 📈

AI-powered email tools offer numerous benefits for law firms:

  • Enhanced efficiency through automation of routine tasks

  • Improved client service and satisfaction

  • Assistance in drafting responses and suggesting relevant case law

  • Flagging important deadlines

  • Improved accuracy in document review and contract analysis

These capabilities allow lawyers to focus on high-value work, potentially improving outcomes for clients and minimizing liabilities for law firms.

AI Email Assistants 🖥️

Several AI email assistants are available for popular email platforms:

  1. Microsoft Outlook:

    • Copilot for Outlook: Enhances email drafting, replying, and management using OpenAI's GPT models.

  2. Apple Mail:

  3. Gmail:

    • Gemini 1.5 Pro: Offers email summarization, contextual Q&A, and suggested replies.

  4. Multi-platform:

🚨 Always Proofread Your Work and Confirm Citations! 🚨

Ethical Considerations and Challenges 🚧

Confidentiality and Data Privacy

The use of AI in legal email raises several ethical concerns, primarily regarding the duty of confidentiality outlined in ABA Model Rule 1.6. Lawyers must ensure that AI systems do not compromise client information or inadvertently disclose sensitive data to unauthorized parties.

To address this:

Lawyers should always check their work - especially when using AI!

  1. Implement robust data security measures

  2. Understand AI providers' data handling practices

  3. Review and retain copies of AI system privacy policies

  4. Make reasonable efforts to prevent unauthorized disclosure

Competence (ABA Model Rule 1.1)

ABA Model Rule 1.1, particularly Comment 8, emphasizes the need for lawyers to understand the benefits and risks associated with relevant technology. This includes:

  • Understanding AI capabilities and limitations

  • Appropriate verification of AI outputs (Check Your Work! A citation-lookup sketch follows this list.)

  • Staying informed about changes in AI technology

  • Considering the potential duty to use AI when benefits outweigh risks
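
To illustrate the verification point above: a fabricated citation will generally fail a lookup against a public case-law index. The Python sketch below queries CourtListener's search API for illustration only; treat the endpoint, parameters, and response fields as assumptions to verify against the current API documentation.

```python
import requests

# Hedged sketch: before filing, confirm an AI-suggested case actually exists in
# a public case-law index. CourtListener's search API is used for illustration;
# verify the endpoint and fields against its current documentation.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_found(case_name: str) -> bool:
    resp = requests.get(SEARCH_URL, params={"q": case_name, "type": "o"}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# One of the citations fabricated in Mata v. Avianca; a check like this would
# likely have flagged it before filing.
print(citation_found("Varghese v. China Southern Airlines"))
```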

The ABA's Formal Opinion 512 further emphasizes the need for lawyers to understand the AI tools they use to maintain competence.

Client Communication

Maintaining the personal touch in client communications is crucial. While AI can streamline processes, it should not replace nuanced, empathetic interactions. Lawyers should:

  1. Disclose AI use to clients

  2. Address any concerns about privacy and security

  3. Consider including AI use disclosure in fee agreements or retention letters

  4. Read your AI-generated/assisted drafts

Striking the Right Balance ⚖️

To ethically integrate AI into legal email systems, firms should:

  1. Implement robust data security measures to protect client confidentiality

  2. Provide comprehensive training on AI tools to ensure competent use

  3. Establish clear policies on when and how AI should be used in client communications

  4. Regularly review and audit AI systems for accuracy and potential biases

  5. Maintain transparency with clients about the use of AI in their matters

  6. Verify that AI tools are not using email content to train or improve their algorithms

AI is a tool for work - not a replacement for final judgment!

By carefully navigating ⛵️ these considerations, lawyers can harness the power of AI to enhance their practice while upholding their ethical obligations. The key lies in viewing AI as a tool to augment 🤖 human expertise, not replace it.

As the legal profession evolves, embracing AI in email and other systems will likely become essential for remaining competitive. However, this adoption must always be balanced against the core ethical principles that define the practice of law.

And Remember: Always Proofread Your Work and Confirm Citations BEFORE Sending Your E-mail (With the Use of AI or Not)!!!