The utility of AI in the legal industry has yet to be determined.
Recently, two New York attorneys were sanctioned after relying on ChatGPT for legal research (Mata v Avianca, Inc., No. 22-cv-1461-PKC, --- F. Supp. 3d ---, 2023 WL 4114965 (S.D.N.Y. June 22, 2023)). Attorney Steven Schwartz commenced an action in Supreme Court, New York County, on behalf of a client of his firm; the action was then removed to the Southern District of New York. Another attorney at Schwartz’s firm, Peter LoDuca, took over the case. LoDuca filed opposition papers in response to the defendant’s motion to dismiss, arguing why the action was not time-barred. Schwartz conducted the purported legal research for the opposition papers. Schwartz explained that he initially thought he had limited access to federal cases on Fastcase, and ultimately resorted to ChatGPT, which he believed was “ ‘a super search engine.’ ” The defendant’s reply papers pointed out that cases cited in the plaintiff’s opposition papers either did not exist or stood for something other than the propositions for which they were cited. The Court responded by ordering LoDuca to file an affidavit with annexed copies of a number of the purported cases. After receiving the affidavit, the Court determined that the plaintiff’s opposition papers cited cases that did not exist.
The Court determined that Schwartz and LoDuca acted in “bad faith.” The Court explained that “the record [ ] would look quite different” had the attorneys informed the Court of their actions earlier, instead of “doubl[ing] down.” Schwartz, LoDuca and their firm were sanctioned as follows: (1) the imposition of a $5,000 penalty, to be paid jointly and severally; (2) each was ordered to mail a letter to the client identifying and attaching the Opinion and Order and relevant transcripts; and (3) each had to notify every judge named in the nonexistent cases, identifying and attaching the Opinion and Order, relevant transcripts, and the “fake ‘opinion’ attributed to the recipient judge.”
Courts are also starting to implement rules to ensure responsible use of AI, or to prohibit its use altogether. For example, Judge Brantley Starr of the Northern District of Texas requires litigants to “file . . . a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence . . . or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human” (Mandatory Certification Regarding Generative Artificial Intelligence). Similarly, Magistrate Judge Gabriel A. Fuentes of the Northern District of Illinois issued a standing order requiring that “[a]ny party using any generative AI tool to conduct legal research or to draft documents for filing with the Court must disclose in the filing that AI was used, with the disclosure including the specific AI tool and the manner in which it was used” (Standing Order for Civil Cases Before Magistrate Judge Fuentes (May 31, 2023)). Judge Stephen Vaden of the United States Court of International Trade focused on the issue of confidentiality, requiring parties that use generative AI to submit “(1) [a] disclosure notice that identifies the program used and the specific portions of text that have been so drafted; [and] (2) [a] certification that the use of such program has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party” (Order on Artificial Intelligence (June 8, 2023)).
The use of AI is in its infancy in the legal profession. It is evident that AI could increase efficiency in the profession in some ways, but cause dilemmas in others, such as legal research. Nevertheless, it is already clear that attorneys who choose to use ChatGPT and similar applications must do so with caution. For good reason, courts are not convinced of their reliability. We expect that as the legal profession continues to grapple with how AI will or will not be used in litigation and other areas of practice, more lessons will be learned. We are watching these developments closely.
For more information on this decision or if you have any related questions, contact Nicole Macris or the Bond attorney with whom you are in regular contact.