
By Lieutenant Colonel Jeremy P. DeLaCerda
This article does not constitute legal advice and should not be construed as such. The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. government.

CUI and Beyond:
Navigating Large Language Models, Client Confidentiality, and Controlled Unclassified Information in Military Legal Practice

Military Legal Practice: A Dual Profession

Military attorneys and paralegals are dual professionals, operating simultaneously in the profession of law and the profession of arms. Both vocations have long recognized the value of information and prescribed how to handle and safeguard it, whether from adversaries in the courtroom or on the battlefield. The vast digitization of information in recent years has only increased the criticality of data protection. One frontline in the contemporary battle to protect information is the field of Large Language Models (LLMs). This article examines the informational risks posed by LLMs in military legal practice. It first describes these risks and explains how both the legal and military fields have responded. Although the professions have similar overarching goals of protecting information, their requirements differ, and following the rules of one does not guarantee compliance with those of the other. Accordingly, this article outlines how the two sets of professional duties interact and concludes by providing a practical framework for navigating LLMs in a manner that meets the requirements of both professions.

LLMs and Sensitive Information

LLMs—like OpenAI’s ChatGPT, Anthropic’s Claude, and Meta’s Llama—are among the most widely used forms of Artificial Intelligence (AI) today. These models can aid legal practice by accelerating routine tasks, from drafting emails and documents to developing opening statements and closing arguments, but multiple risks accompany these benefits, including the risk of compromising sensitive information.[1]

LLMs bring two levels of informational risk. The first—shared with many other Information Technology (IT) systems—involves how securely (or not) data is stored, accessed, and used. Typical issues include whether data at rest is adequately protected; how easily individuals, businesses, or foreign governments can access data in the normal course of business; whether information is sold or transferred to third parties; and the likelihood of terms of service changing to grant unanticipated third parties access in the future. The DeepSeek chatbot, launched in the U.S. in January 2025, illustrates this group of concerns. Its data is stored on servers in China, causing apprehension among U.S. policymakers that the Chinese government might have access to information entered by American users.[2] Compounding this concern, shortly after the chatbot’s release, researchers identified a security gap exposing more than a million records—including user inputs—to public view.[3]

 
The second level of risk is unique to AI-enabled tools and relates to how user prompts are used. LLMs are examples of Generative AI, models that encode relationships among large amounts of training data and use those relationships to calculate the most probable responses to user prompts.[4] To help refine these models, developers configure some LLMs to incorporate user prompts into their training data.[5] Through this process, it is theoretically possible for one user’s input—including any sensitive information it contains—to show up in an LLM’s responses to other users. In practice, this risk is often reduced by conducting much of an LLM’s development during a controlled “pretraining” stage before the model is opened to users.[6] Some AI companies have taken steps to further mitigate concerns over use of inputs. For example, OpenAI allows users to opt out of having their prompts used for training, and Anthropic requires users to opt in before their information is used to train its model.[7] Although the informational risk may be diminished in some models, it is vital that users research LLMs’ terms of service and privacy policies to fully understand how the information in their prompts may be used, both today and in the future.
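
To make the training-data risk concrete, here is a toy sketch in Python of the core mechanic. It is a simple word-pair counter, not a neural network and certainly not any production LLM, but it shows the same basic idea: the model encodes which words follow which in its training text, then emits the most probable continuation. Note what happens when a hypothetical user prompt containing a sensitive fact is folded into the training text.

from collections import Counter, defaultdict

# Tiny training corpus of legal-sounding text (purely illustrative).
corpus = (
    "the court granted the motion . "
    "the court denied the motion . "
    "counsel filed the motion . "
)

# A hypothetical user prompt folded into the training data (the risk
# described above).
corpus += "client smith admitted liability . "

# Encode the word-to-word relationships: count which word follows which.
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Greedily emit the most probable next word at each step."""
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the"))     # -> "the motion . the motion ."
print(generate("client"))  # -> "client smith admitted liability . the"

Once the sensitive sentence is part of the training data, a prompt beginning with “client” reproduces it verbatim. Production LLMs operate at vastly greater scale and recall training text probabilistically rather than exactly, but this is the dynamic that makes terms of service governing prompt training worth scrutinizing.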

LLMs and Professional Duties

While anyone using LLMs should be aware of their risks, military attorneys and paralegals must be especially vigilant, as these risks implicate both their legal duty of confidentiality and military duty to protect Controlled Unclassified Information (CUI). The legal duty of confidentiality generally prohibits disclosing “information relating to the representation of a client.”[8] At its core, this rule prevents knowing and intentional release of client-related information, but it also mandates taking “reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[9] This duty applies in all contexts but is particularly relevant in the digital world.[10]

Military legal practice carries an additional duty to protect government information. Obviously, this protection covers classified material, but more applicable to daily practice is the requirement to protect CUI, which includes a wide range of unclassified executive branch information that is subject to safeguarding from unauthorized access.[11] Military lawyers and paralegals routinely handle many types of CUI, including legal documents; law enforcement reports; and personnel, financial, and health records.[12] To help defend CUI from improper access, DoD Instruction (DoDI) 5200.48, Controlled Unclassified Information (CUI), requires Department of Defense (DoD) personnel to conduct government business involving CUI only on DoD or approved contractor IT systems.[13]

To summarize the problem, LLMs carry risks that sensitive information will be disclosed to third parties and used beyond its intended purpose; when this occurs in military legal practice, it can implicate both the legal duty to preserve confidential client information and the military duty to protect CUI. With this in mind, we turn to how the legal and military professions have responded to these concerns.

 
The Legal Response

Legal organizations have actively responded to LLMs’ risks, with the American Bar Association, several state bars, and some courts providing guidance. While these bodies all agree on the need to maintain client confidentiality when using LLMs, their recommended methods for doing so are not uniform. Thus—as with all professional responsibilities—lawyers must reference their jurisdictions’ current guidance when deciding whether and how to use LLMs. Though specific guidelines vary, the following points are prevalent:

  • Before using an LLM, lawyers must fully understand its data security practices, including how information is stored, secured, and used; who has access to it; and who might be given access in the future.[14]
  • If, based on an LLM’s security policies, attorneys are not confident their clients’ information will be secure, they should not use the LLM.[15]
  • Lawyers should receive clients’ informed consent before inputting their confidential data into an LLM.[16]

The various legal responses also reveal two broad themes. First, many legal organizations see LLMs as posing unique risks to client data, requiring tailored, detailed guidance on usage. Second, the legal profession is skeptical that LLMs can adequately preserve confidentiality, especially when the LLM is commercially available—subject to changing terms of service, business practices, and ownership—rather than a closed, in-house design.[17] Military practitioners must not only review their jurisdictions’ professional legal guidance but also take the further step of ensuring LLMs do not compromise government CUI.

The Military Response

The framework for using LLMs with CUI is clear-cut: CUI may only be processed on DoD or approved contractor IT systems. Given this relative simplicity, most military LLM policies for protecting sensitive information are written in general terms and do not offer the same level of detail as civilian legal confidentiality guidance. For example, the Army broadly states that LLMs are “subject to existing legal, cybersecurity, information, operational security, and classification policies, as well as [Generative] AI-specific policy.”[18] The Navy, similarly, affirms that DoDI 5200.48—the DoD’s basic CUI instruction—applies to LLMs, though it also recommends that commercial LLMs not be employed for any operational uses, even where CUI is not involved.[19] Therefore, before using an LLM, military members must ensure not only that it is DoD-approved for CUI, but also that additional military restrictions—such as the Navy’s “no operational use” rule—do not prohibit its use.

 
 
[Video, 02:00: Air Force Doctrine Note 25-1, Artificial Intelligence]

LLMs in the Dual-Professional Environment

A core implication of the dual-professional nature of military legal practice is that any time military lawyers or paralegals use LLMs, they must simultaneously comply with both sets of professional duties. And a key insight from the legal and military professions’ responses to LLMs is that complying with one set of duties does not ensure compliance with the other. For example, an attorney could comply with her jurisdiction’s legal duty to ensure her client’s confidential information is protected, yet if the input contained any CUI, she still could not enter it into a commercial LLM. Conversely, a paralegal could have access to an LLM the DoD has approved for CUI; but if its privacy policy does not ensure his client’s confidentiality, its use would remain prohibited. Thus, protecting sensitive information in military legal practice requires analyzing both military and legal duties before proceeding. What does this look like in practice?

CUI

The first step is straightforward. Is CUI involved? If so, DoDI 5200.48 mandates using an LLM authorized by the DoD. Fortunately, the military services have been deploying their own LLMs, including the Air Force’s NIPRGPT and the Army’s CamoGPT (whose website indicates it is approved for CUI).[20] These, and similar DoD tools, are still in experimental phases and may not offer much assistance now, but given the rate of change in this area, their capabilities and usefulness are likely to continue growing. Soon, practitioners are likely to have access to multiple capable, CUI-compliant LLMs.

 
Confidentiality

Next, the duty of confidentiality requires complying with legal professional responsibility rules—including AI-specific guidance—in users’ jurisdictions. At a minimum, this will require LLMs to have clear terms of service that ensure third parties cannot access information input by users. Just because an LLM is approved for CUI does not necessarily mean it will preserve confidentiality; even within a CUI-compliant IT system, other DoD members might have access to prior inputs used as training data or see them incorporated in responses to their own queries. This could damage confidentiality just as much as revealing information to the general public, or cause even more damage given the military’s close-knit structure. For example, each service’s Judge Advocate General’s Corps includes both the prosecutors and the defense attorneys on the same cases, and they certainly should not have access to each other’s LLM prompts.

Fulfilling the legal duty of confidentiality will also help give necessary increased protection to specific types of CUI. Information military legal offices regularly handle, such as law enforcement reports, Personally Identifiable Information (PII), and Protected Health Information (PHI), must not only be stored on CUI-capable systems, but also only accessed and viewed by authorized persons.[21] What does this look like in an LLM? A good frame of reference is the Army’s CalibrateAI, a pilot program exploring the use of Generative AI for contracting activities.[22] It “includes customizable user-access controls to protect ‘need to know’ information, ensuring that data security and confidentiality are paramount.”[23] This individualized, “need to know” protection is necessary to ensure confidentiality when using LLMs within military legal practice.
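
For illustration, the sketch below shows one way such “need to know” gating might look in code. The structure and names are hypothetical assumptions for this article, not CalibrateAI’s actual design: each stored prompt is tagged with its case and team, and retrieval is filtered on both.

from dataclasses import dataclass, field

@dataclass
class Prompt:
    text: str
    case_id: str
    team: str  # e.g., "prosecution" or "defense"

@dataclass
class PromptStore:
    """Hypothetical store of LLM prompts with need-to-know retrieval."""
    prompts: list = field(default_factory=list)

    def add(self, prompt):
        self.prompts.append(prompt)

    def visible_to(self, case_id, team):
        """Return only prompts the requesting team has a need to know."""
        return [p for p in self.prompts
                if p.case_id == case_id and p.team == team]

store = PromptStore()
store.add(Prompt("Draft voir dire questions on ...", "US-v-Smith", "prosecution"))
store.add(Prompt("Summarize mitigation evidence ...", "US-v-Smith", "defense"))

# Defense counsel on the same case sees only the defense team's prompts.
print([p.text for p in store.visible_to("US-v-Smith", "defense")])

The design point is that the filter sits on every retrieval path, including any reuse of stored prompts as training data, so opposing counsel can share one CUI-approved system without seeing each other’s inputs.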

 
No Shortcuts

There are no shortcuts to conducting both CUI and confidentiality analyses, as becomes evident when looking at potential alternatives. One might try to strip confidential data and CUI before inputting information into an LLM to neutralize the risk of revealing sensitive data. The line between protected and unprotected information can be ambiguous, however, leaving a risk of inadvertently inputting sensitive information. Additionally, the more sanitized and abstract input becomes, the less helpful an LLM’s output will be (e.g., how can an LLM produce a useful first draft of an argument without the specific facts of the case?). It will always be tempting to enter additional details to get more valuable results, and as the factual granularity increases, so does the risk of revealing confidential information or CUI.
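
A deliberately naive sketch shows why. Suppose a redaction filter built from a single illustrative pattern, a Social Security number in dashed format; the pattern and examples are hypothetical, not an approved redaction tool.

import re

# One illustrative pattern: an SSN in the common dashed format.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize(text):
    """Naively redact anything matching the known pattern."""
    return SSN.sub("[REDACTED]", text)

print(sanitize("SSN 123-45-6789 belongs to SrA Smith."))
# -> "SSN [REDACTED] belongs to SrA Smith."
#    The dashed number is caught, but the member's name passes through.

print(sanitize("SSN 123 45 6789 belongs to SrA Smith."))
# -> Unchanged: the same number, space-separated, slips past the pattern.

Pattern lists can always grow, but names, dates, and case-specific facts (exactly the details that make a prompt useful) have no fixed format to match.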

Another potential shortcut is to simply forgo LLMs altogether in military legal practice, and indeed this may be the most prudent option in many cases today. But even this cautious course carries risks. Attorneys have a duty to keep up with technological changes, including AI.[24] The more common and capable LLMs become, the more lawyers and paralegals will need to evaluate whether not using them might violate their duty to competently represent their clients.[25] Thus, LLM usage may be delayed, but probably not avoided. When they encounter LLMs, practitioners must directly address the two-part analysis of ensuring both their legal and military duties are met.
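
To summarize, the two-part analysis can be sketched as a simple decision procedure. The names below are hypothetical shorthand for this article; each value stands in for the human research described above (DoDI 5200.48 status, service-specific policy, the jurisdiction’s ethics guidance, and the LLM’s terms of service).

from dataclasses import dataclass

@dataclass
class LlmAssessment:
    involves_cui: bool            # will the prompt contain CUI?
    dod_approved_for_cui: bool    # DoD or approved contractor IT system?
    service_rules_permit: bool    # e.g., the Navy's "no operational use" rule
    confidentiality_assured: bool # terms of service, jurisdiction's ethics
                                  # guidance, client consent

def may_use_llm(a):
    """Both professional duties must be satisfied before proceeding."""
    # Step 1: the military duty (CUI and service policy).
    if a.involves_cui and not a.dod_approved_for_cui:
        return False
    if not a.service_rules_permit:
        return False
    # Step 2: the legal duty (confidentiality), required even on a
    # CUI-approved system.
    return a.confidentiality_assured

# A CUI-approved tool still fails if confidentiality is not assured.
print(may_use_llm(LlmAssessment(True, True, True, False)))  # -> False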

The Road Ahead

As DoD-approved LLMs become more capable and prevalent, an increasing number of situations will present themselves wherein attorneys and paralegals believe using an LLM will be in their clients’ best interest. Being mindful of the dual-professional nature of military legal practice and the two-step analysis to protect sensitive information will allow them to approach LLMs with confidence, knowing they can harness the benefits of these tools while protecting their clients’ data. The AI landscape is likely to continue changing, and practitioners will need to keep abreast of emerging professional legal and military guidance. The details may change, but the foundational duties to protect confidential client information and executive branch CUI will remain.


About the Author

 

Lieutenant Colonel Jeremy P. DeLaCerda (B.M., Centenary College of Louisiana; J.D., University of Illinois College of Law) is a legal advisor for Air Force Global Strike Command. He is admitted to practice law before the Supreme Court of Illinois, the Air Force Court of Criminal Appeals, and the United States Court of Appeals for the Armed Forces.
Edited by: Lieutenant Colonel Kurt H. Eberle
Layout by: Thomasa Huffstutler
 

Endnotes

[1] See N.Y. State Bar Ass’n, Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence 19–47 (2024), https://fingfx.thomsonreuters.com/gfx/legaldocs/znpnkgbowvl/2024-April-Report-and-Recommendations-of-the-Task-Force-on-Artificial-Intelligence.pdf.
[2] Bobby Allyn, International Regulators Probe How DeepSeek is Using Data. Is the App Safe to Use?, NPR (Jan. 31, 2025), https://www.npr.org/2025/01/31/nx-s1-5277440/deepseek-data-safety.
[3] Lily Hay Newman & Matt Burgess, Exposed DeepSeek Database Revealed Chat Prompts and Internal Data, Wired (Jan. 29, 2025), https://www.wired.com/story/exposed-deepseek-database-revealed-chat-prompts-and-internal-data.
[4] Adam Zewe, Explained: Generative AI, MIT News (Nov. 9, 2023), https://news.mit.edu/2023/explained-generative-ai-1109.
[5] See, e.g., How Your Data is Used to Improve Model Performance, OpenAI, https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance (last visited Mar. 18, 2025).
[6] See, e.g., Notice on Model Training, Anthropic (Feb. 19, 2025), https://www.anthropic.com/legal/model-training-notice.
[7] How Your Data is Used, supra note 5; Notice on Model Training, supra note 6.
[9] Id.
[12] CUI Categories and Abbreviations, DoD CUI Program, https://www.dodcui.mil/CUI-Categories-and-Abbreviations (last visited Mar. 18, 2025) (listing CUI categories).
[13] Dep’t of Def. Instruction 5200.48, Controlled Unclassified Information (CUI) para. 3.10.b (Mar. 6, 2020), https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/520048p.PDF.
[14] See, e.g., ABA Comm. on Ethics & Pro. Resp., Formal Op. 512, at 7 (2024), https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf; State Bar of Cal. Standing Comm. on Pro. Resp. & Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law 2, https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf (last visited Mar. 18, 2025).
[15] See, e.g., N.J. Courts, Legal Practice: Preliminary Guidelines on New Jersey Lawyers’ Use of Artificial Intelligence 5 (Jan. 24, 2024), https://www.njcourts.gov/sites/default/files/notices/2024/01/n240125a.pdf?cb=aac0e368; State Bar of Cal., supra note 14, at 2.
[16] See, e.g., ABA Comm. on Ethics & Pro. Resp., supra note 14, at 7; N.Y. State Bar Ass’n, supra note 1, at 58.
[17] See, e.g., Fla. Bar Ethics Op. 24-1, at 3–4 (Jan. 19, 2024), https://www.floridabar.org/etopinions/opinion-24-1.
[18] Dep’t of the Army, ADS-GOV-AI-024, Chief Information Officer Guidance on Generative Artificial Intelligence and Large Language Models (June 27, 2024), https://www.dau.edu/sites/default/files/webform/documents/27066/Army%20CIO%20Guidance%20on%20Gen%20AI%20and%20LLM_20240627%20%28003%29.pdf.
[19] Dep’t of the Navy, Department of the Navy Guidance on the Use of Generative Artificial Intelligence and Large Language Models (Sept. 6, 2023), https://www.doncio.navy.mil/ContentView.aspx?ID=16442.
[20] Army CAMO GPT, Tradewinds, https://www.tradewindai.com/army-camo-gpt (last visited July 11, 2025); Sec’y of Air Force Public Affairs, Department of the Air Force Launches NIPRGPT, U.S. Air Force (June 10, 2024), https://www.af.mil/News/Article-Display/Article/3800809/department-of-the-air-force-launches-niprgpt.
[21] See, e.g., Air Force Instr. 33-332, Air Force Privacy and Civil Liberties Program para. 7.1.2 (May 12, 2020), https://static.e-publishing.af.mil/production/1/saf_cn/publication/afi33-332/afi33-332.pdf (regarding PII); Air Force Manual 41-210, Tricare Operations and Patient Administration § 4A (June 22, 2021), https://static.e-publishing.af.mil/production/1/af_sg/publication/afman41-210/afman41-210.pdf (regarding PHI); Dep’t of the Air Force Instr. 71-101 Vol. 1, Criminal Investigations Program para. 1.5.1.2 (Jan. 24, 2025), https://static.e-publishing.af.mil/production/1/saf_ig/publication/dafi71-101v1/dafi71-101v1.pdf (regarding law enforcement reports).
[22] U.S. Army Pub. Affairs, Army Launches Pilot to Explore Generative AI for Acquisition Activities, U.S. Army (Oct. 18, 2024), https://www.army.mil/article/280500/army_launches_pilot_to_explore_generative_ai_for_acquisition_activities.
[23] Id.
[24] See generally ABA Comm. on Ethics & Pro. Resp., supra note 10 (discussing the duty of competence as it relates to technology).
[25] See N.Y. State Bar Ass’n, supra note 1, at 29.
 

 

 