The New Jersey Supreme Court has appointed a 31-member Committee on Artificial Intelligence and the Courts, consisting of judicial and government leaders, judges, attorneys, educators, and cybersecurity and technology experts, to examine the legal and ethical implications that artificial intelligence poses for court operations and the practice of law.
Chief Justice Stuart Rabner explained that “[a]rtificial intelligence is a tool that we are still learning about, and while it holds the potential for great opportunities, it can also create significant challenges within the legal community. This committee brings together leaders with different backgrounds and perspectives who can engage in a comprehensive review of the myriad issues this new technology presents for the courts.”
The Harvard Business Review continues its regular coverage of these issues. As Kathy Baxter and Yoav Schlesinger put it in Managing the Risks of Generative AI (hbr.org) (June 6, 2023), "[g]enerative AI is evolving quickly, so the concrete steps businesses need to take will evolve over time. But sticking to a firm ethical framework can help organizations navigate this period of rapid transformation."
I spent last week at a conference with product manufacturers focused on the commingling of data, attorney-client privilege protections, eDiscovery opportunities, ethical implications, and trial uses for artificial intelligence. We discussed the advantages and disadvantages, but embraced the reality that artificial intelligence will have a profound impact on our lives, our professions, and the work we do inside and outside the courtroom.
Today, the New Jersey Law Journal's Editorial Board published a Commentary entitled "Addressing the Abuses of Generative AI in the Legal Profession," composed by ChatGPT itself, which discussed the "misuse and abuse" of generative AI in the legal field. The editorial urged that, "[a]s AI becomes increasingly integrated into the legal landscape, regulatory bodies and professional organizations must establish guidelines and standards for its responsible use."
The AI-generated editorial highlighted as a "significant concern" the "potential for AI-generated content to perpetuate bias and inequality." Indeed, "AI models are trained on data from the past, and if that data contains biases, the AI can inadvertently replicate and amplify them. In the legal context, this means that AI-generated legal documents or advice could perpetuate historical injustices, creating a legal system that is unfair and inequitable."
The editorial also emphasized that "there is a risk that the use of generative AI may diminish the critical human element in the practice of law. While AI can streamline processes and improve efficiency, it cannot replace the nuanced judgment, empathy, and ethical considerations that lawyers bring to their work. Overreliance on AI could erode the fundamental values that underpin the legal profession."
We will continue to analyze these ethical and practical issues, but it sounds like we all agree -- even ChatGPT -- on the need for reliable safeguards. Can't say it much better than this: "[W]ith great power comes great responsibility, and the legal community must grapple with the ethical and practical implications of AI."