By William Hussey

AI: An Exciting New Tool for Estate Attorneys? Proceed with Caution!

Updated: Mar 25

For my entire career, continuing technological advances have changed the way we interact with our clients and go about our daily business as attorneys.  Although I was an early adopter of computers, electronic legal research, cell phones, and the like, all of us are now regular users of such innovations.  The next iteration of these advances is the rise over the past decade of so-called Artificial Intelligence (“AI”) software programs that promise to increase the speed, and often also the accuracy, of our legal work.  The pace of AI innovation has greatly accelerated over the last 12 to 18 months with the introduction of more capable Large Language Model AI programs such as OpenAI’s GPT-4 and the pending launch of newly integrated AI products from companies such as Westlaw and LexisNexis.

What is AI exactly?

Per Microsoft’s Copilot (itself a form of so-called weak AI):

AI stands for Artificial Intelligence, which is the branch of computer science that deals with creating machines or software that can perform tasks that normally require human intelligence, such as reasoning, learning, decision making, natural language processing, computer vision, and more. AI can be classified into two main categories: weak AI and strong AI. Weak AI, also known as narrow AI, is designed to perform a specific task or function, such as playing chess, recognizing faces, or translating languages. Strong AI, also known as general AI or artificial general intelligence (“AGI”), is the hypothetical goal of creating machines or software that can exhibit human-like intelligence across a wide range of domains and tasks, such as understanding emotions, forming beliefs, and having self-awareness. Currently, most of the AI applications are based on weak AI, while strong AI remains a distant and elusive vision.

AI in estate planning

In short, there is no army of sentient robots ready to replace all the estate planning professionals and paraprofessionals who may be reading this article.  What is available to us are AI software programs that may, though will not always, allow us to serve our clients in new and more efficient ways while reducing the time we too often spend on routine tasks that might be better performed by these new AI programs.

As estate practitioners adopt these AI tools, they should nevertheless proceed with a healthy dose of caution as well.

There have already been several well-publicized cases of lawyers being sanctioned by state courts for erroneous case citations in motions and briefs, produced in large part by ChatGPT, that those attorneys filed with the courts.  A simple Internet search quickly turns up the cases in New York and Colorado highlighting the potential frailties of the widely available free AI products.  In fact, the “hallucinations” (i.e., false information) contained in some AI responses to user inquiries are now widely known.  This, of course, raises the question: what exactly are our ethical obligations as we begin to integrate this new technology into our everyday practices?

Ethical Rules and AI

The Model Rules of Professional Conduct were promulgated by the American Bar Association in 1983 (the “Model Rules”).  Although they were updated in 2012 to deal with the increasing use of computers and other technological advances in the legal profession, they do not directly address the use of AI programs.  The Rules of Professional Conduct (“RPC”) were adopted by Order of the Supreme Court of Pennsylvania on October 16, 1987, and became effective on April 1, 1988.  Neither the RPC as adopted in Pennsylvania nor the Model Rules as adopted in other states directly address the use of AI programs (yet).  The Model Rules and the RPC in Pennsylvania nevertheless contain sufficient principles governing an attorney’s conduct to provide guidance on the use of AI programs and products going forward.  Adopters of such technological innovation would be wise to familiarize themselves with these rules and apply them vigorously going forward, given the problematic issues already revealed with the use of current AI offerings such as ChatGPT.  As stated in the RPC Preambles, “[i]n all professional functions a lawyer should be competent, prompt and diligent.”  Thus, the following is an abbreviated roadmap to navigating these new waters.

Rule 1.1 – Attorney Competence

RPC Rule 1.1 provides that “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”  The ethical obligations of Rule 1.1 can arguably be divided into two separate duties with respect to the adoption of AI technologies.

1.      An attorney (whether directly or through a staff member) has an obligation to understand, at least at a minimal level, the operations and functions of the AI program that they choose to use.  Such understanding need not rise to the level of a software engineer.  Rather, at a bare minimum, the attorney should understand at a basic level how data is input into the AI program, how the AI program uses that data and other information to produce its output, and the other information that the AI program is relying upon to produce such results. 

2.      Attorneys and their staff members cannot rely blindly on the output from these AI programs.  While their accuracy and reliability continue to improve at a significant pace, AI programs are not infallible.  As I (along with many others) am fond of saying, “Garbage in, garbage out!”  The inherent unreliability of current offerings requires diligence from the users who choose to employ them.  Otherwise, you may inadvertently find yourself facing a disciplinary proceeding or sanctions from a judge, like those attorneys in Colorado and New York.

For example, ChatGPT relies upon millions of data points sourced from the Internet.  While much of that information may be accurate, at least some portion of it will not be.  The problem of “hallucination” by these AI products must also be recognized.  While these concerns are likely to be lessened somewhat, or even eliminated, by closed AI programs such as those in development by Westlaw and LexisNexis (among many others), the user must recognize and appreciate those distinctions before relying on an AI program’s output.  RPC Rule 1.1 requires no less.

Comment 8 to Rule 1.1 – Technical Competence

The requirement that we, as attorneys, display some level of technical competence with respect to the technologies we employ in our practice is buttressed by the promulgation of Comment 8 to Model Rule 1.1. and RPC Rule 1.1 as adopted here in Pennsylvania.  Comment 8 (emphasis added) provides that “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.  To provide competent representation, a lawyer should be familiar with policies of the courts in which the lawyer practices, which include the Case Records Public Access Policy of the Unified Judicial System of Pennsylvania.”  Comment 8 strongly suggests that an attorney who does not investigate and understand the risks associated with the use of AI technology in his or her practice violates his or her duty to provide competent representation.  Just as we would not draft estate planning documents without inquiring about the client’s familial and financial situation, we should not be using AI programs and products without first making at least basic inquiries into how the AI program gathers and uses information that will then be incorporated into work product in the representation of our clients.

Rule 1.6 – Client Confidentiality

The use of open AI programs also raises concerns about client confidentiality, which is governed by RPC Rule 1.6.  That rule provides: “[a] lawyer shall not reveal information relating to representation of a client unless the client gives informed consent, except for disclosures that are impliedly authorized in order to carry out the representation, and except as stated in paragraphs (b) and (c).” 

Open AI programs such as ChatGPT explicitly gather and employ user inputs to “learn” and become more accurate in future responses.  These concerns are better addressed by closed AI systems such as Westlaw, LexisNexis, and other products specifically designed for the legal industry.  It is critical that an attorney using any of these programs understands how user data is collected and stored.  It is highly recommended that no client information be input into an open AI program.  Rather, the user should use codes or nicknames (e.g., “Jane Doe”) in place of any client information.  Likewise, it is advisable to inform clients of the use of an AI system in delivering the attorney’s services and to obtain the client’s consent to the use of AI programs and the disclosure (if any) of their personally identifiable information during such use.

Rule 5.3 – Duty to Supervise Non-Lawyers

The Model Rules and the RPC also impose a duty upon attorneys to supervise others who are engaged in delivering legal services to our clients. Specifically, RPC Rule 5.3 provides that “[w]ith respect to a nonlawyer employed or retained by or associated with a lawyer: (a) a partner and a lawyer who individually or together with other lawyers possesses comparable managerial authority in a law firm shall make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that the person’s conduct is compatible with the professional obligations of the lawyer. ...”  Consequently, the attorney’s duty to understand the workings of any AI programs used in the practice is not limited to the attorney’s own use.  Rather, it extends to all of those who work under him or her in delivering legal services to clients.  Accordingly, it is recommended that each firm or other law office develop a policy governing the use of AI by any personnel.  The policy must then be enforced on a regular basis to ensure that no one in the firm is violating the ethical duties and obligations discussed in this article.

Rule 5.5 – The Unauthorized Practice of Law

At least one state court has specifically ruled that the use of AI programs does not rise to the level of the unauthorized practice of law.  Nevertheless, given the wide disparity among the states in how they define such unauthorized practice, it remains a consideration when evaluating AI programs.

In Pennsylvania, RPC Rule 5.5 provides that: “[a] lawyer shall not practice law in a jurisdiction in violation of the regulation of the legal profession in that jurisdiction, or assist another in doing so.”  The RPC should be read in this context to reinforce the notion that the practice of law is still a human one.  Attorneys and other legal professionals cannot merely rely on an AI program to perform their jobs.  If utilized, the output from an AI program must be carefully scrutinized to ensure accuracy, reliability and suitability to the particular facts of the client for which it is being employed.  Thus, the use of predictive text AI programs does not differ significantly from the use of the many forms that we already employ on a regular basis to serve our clients.  Like those forms, AI programs are merely a tool to hopefully increase the speed and efficiency with which we deliver services to our clients.  A human touch is still necessary and desirable.


In summary, as with all technological innovations, in time the use of integrated AI programs in the practice of law will surely be routine.  Although there are clear pitfalls in using current AI products readily available to the public, including those in the legal profession, the increasingly rapid integration of AI technology into industry-specific products will hopefully alleviate some of those concerns.  In all events, attorneys must ensure that they and their staff understand and appreciate the limitations of AI technology as employed in their practices.  This will require more effort for some than others, but it is required of all.  Most importantly, we must recognize that AI technology is not capable of replacing us as attorneys.  It is merely another technological tool in our ever-evolving profession that requires us to continue to be good, or even great, lawyers!

William Hussey is a partner in Kleinbard’s Trusts & Estates Practice as well as a member in the Business & Finance Department. He focuses his practice on private client services, including advising businesses and their owners on business operations, transactions, and succession planning with a particular emphasis on federal, state, local, and international tax issues, and also counseling individuals and fiduciaries on estate and trust planning and administration.


