By Robert G. Rassp
Presiding Judge
Workers’ Compensation Appeals Board
Los Angeles District Office, CA
Article printed with permission of Robert G. Rassp
Position paper presented at the 2025 DWC Conference by Hon. Robert G. Rassp, Chairman of the Board of Directors, Friends Research Institute (friendsresearch.org) [Updated on 1-3-2025]
Disclaimers: The opinions expressed in this article are those of the author and are not those
of the State of California Department of Industrial Relations, Division of Workers’
Compensation, or the Workers’ Compensation Appeals Board. The opinions expressed
herein are based in part on the “Common Rule” 45 CFR 46 that pertains to the ethical
requirements in medical research and the protection of research participants. There is no
current legal requirement that 45 CFR 46 applies to injured workers whose claims may
involve the use of AI.
[AI-generated image: “picture of a woman with a parasol.” Thanks go to Robin Kobayashi, Esq., my editor at LexisNexis.]
INTRODUCTION
Artificial Intelligence, or “AI,” is taking our society by storm. When computers first came into wide use in business applications, advances in programming languages occurred every five years or so, with upgrades in software development that would cause computer users to replace old operating systems and download the latest operating system for their Mac or IBM-based computer. Today, software is being upgraded by software itself, using at least six machine languages. In fact, computer programmers can download software applications that are bundled so that applications can easily be embedded in sophisticated computer programs. Have you used a kiosk at McDonald’s? Or ordered a coffee from Starbucks lately? Machines are now processing our orders at fast food joints thanks to sophisticated computer programming. You call a call center and you never speak to a human being. You see the “Chat” icons for banks and other services with a web site? Those are run by AI-based software. If you want to speak to a human being, you usually have to keep repeating “representative!” multiple times, or hit “0” repeatedly, and you might get lucky and get a live person on the phone or in the chat.
Call centers for some companies are now voice activated, with responses generated by a computer program upon verbal or numerical prompting by the calling party. Most of these programs are driven by AI. AI now affects much of our daily lives even though we may not realize that a response to something is driven by a computer program. Your physician interacts with you through physician-patient portals that may be driven by AI, linked to your medical records and the physician’s electronic medical record notes. Did you know that the telehealth appointment you had with your doctor was actually with an avatar while your real doctor was golfing at his favorite course?
So how does AI fit in the context of medicine and law? This article grew out of notes this author used for a presentation at the California Society of Industrial Medicine and Surgery conference that occurred on August 14, 2024 at the Loews Coronado Island Resort. The title of the program was “Artificial Intelligence in Medicine and Workers’ Compensation Law.” The panel consisted of this author (in the capacity of both a workers’ compensation presiding judge and Chairman of the Board of Directors of Friends Research Institute (friendsresearch.org)), Christopher Brigham, MD (editor of the AMA Guides to the Evaluation of Permanent Impairment, 6th Ed. and principal of emedicine.com), Ray Mieszaniec (COO of Evenup, a legal tech company), and defense attorney Negar Matian (who is using AI applications in her workers’ compensation defense law practice).
Essentially the same panel is presenting the same subject at the 2025 DWC Conferences at the Oakland Marriott in Oakland, California and the Los Angeles LAX Marriott in March 2025, with a couple of presenters who were not on the panel in Coronado.
This author’s presentation focuses on the author’s opinion that guardrails need to be placed on the use of AI in the context of medicine and workers’ compensation litigation. While there is no question that AI development companies have emerged to focus on specific industries, including our own world of workers’ compensation claims, a discussion of ethical considerations is necessary as these applications are introduced into our everyday lives. This is especially true in the context of workers’ compensation claims and the role of physicians, including treating doctors and medical-legal evaluators.
So how do the legal requirements for medical-legal reporting work if a physician utilizes AI software to review medical records, to communicate with the injured worker, or to write reports that are admissible at the Workers’ Compensation Appeals Board? Can a defense attorney rely on AI software to write a communication to the employer or claims examiner with recommendations for further case handling? Can defense counsel rely on AI to provide an injured worker’s deposition summary or to develop questions to ask a physician at a deposition? Can counsel delegate writing Points and Authorities, a legal brief, or a Petition for Reconsideration to a generative artificial intelligence based software program? Can a workers’ compensation judge write a decision with the use of an AI program? These questions are all relevant, and everyone in the workers’ compensation system has been or will be confronted by the issue of how AI affects the way these cases are handled going forward. What is a legitimate role, if any, for the use of AI in the context of workers’ compensation cases?
CHATGPT
Most of the public’s first exposure to AI occurred in November 2022 with the public launch of ChatGPT, which allowed anyone with a computer to seek information from an AI platform. You type in a key word or words, and the program produces a litany of information for the user. Think in terms of a Google search on steroids. Sometimes the information would be “garbage in, garbage out,” but more on that issue below. Commercial use of AI became the goal of the software developers of AI: how can AI be developed and marketed to assist specific industries in their use of computer-based intelligent information processing? The goal was and is to monetize the applications of artificial intelligence across industries, from logistics and warehousing to medicine, transportation, law, education, and research. The potential use of AI is endless.
In fact, on October 28, 2024, Apple, Inc. introduced its iPhone 16 featuring what it calls “Apple Intelligence,” which it advertises as:
- “[a] personal intelligence system that uses generative models and personal context to
provide relevant intelligence while protecting privacy. It’s a built-in feature of Apple’s
iOS 18, iPadOS 18, and macOS Sequoia. Apple intelligence offers generative AI tools
for writing and editing, image creation, and organization. It also includes writing tools,
summarized notifications, and the ability to search for things in photos and videos.”
What they are not telling us in this advertisement is that “Apple Intelligence” is nothing more than ChatGPT.
AI IN MEDICINE IN WORKERS’ COMPENSATION CLAIMS
There are two aspects of artificial intelligence that exist in the practice of medicine from an analytical standpoint, not including such things as robot-assisted surgical procedures or other “hands-on” clinical practice. AI in medicine has two forms: (1) predictive analytics and (2) generative AI. Predictive analytics involves such things as AI indicating that a patient has a 75% likelihood of being admitted into an intensive care unit. Generative AI is more prevalent in the context of workers’ compensation related medical practice, where, for example, a computer program using a large language model writes an article. This author guarantees to you, the reader, that this article was NOT generated by AI. Generative AI involves relationships between people.
Further examples of generative AI include patient-portal messages, which can use conversational interfaces for patients to learn about their diagnosis and treatment options, to prepare for surgery (at the patient’s literacy level), or to self-diagnose a condition. Can generative AI be used by a medical-legal physician to “write” a medical-legal report? Can a medical-legal physician rely on a commercially available proprietary generative AI program to review and summarize medical records? There are AI companies selling commercial use of their AI programs who claim, for example, that an accurate summary of 500 pages of an injured worker’s prior medical records takes the AI program 7 minutes to generate. You are reminded that medical records review beyond 200 pages is billed by the medical-legal physician at $3.00 per page pursuant to the medical-legal fee schedule under Title 8 Cal. Code of Regulations Sections 9793(n) and 9795. Is an AI generated summary of medical records in a litigated workers’ compensation case reliable, accurate, credible, and persuasive evidence of the actual records?
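For readers who want the economics made concrete, here is a minimal sketch of the records-review arithmetic just described, assuming (as the fee schedule provision cited above provides) that only pages beyond the first 200 are billed at the $3.00 per-page rate. The page counts are illustrative only, and the sketch deliberately omits the flat-fee components of the medical-legal fee schedule.

```python
# Hypothetical illustration of the records-review arithmetic described in
# the text: under the medical-legal fee schedule (8 CCR §§ 9793(n), 9795),
# review of records beyond the first 200 pages is billed at $3.00 per page.
# The page counts below are examples, not figures from any actual claim.

INCLUDED_PAGES = 200          # pages covered before per-page billing begins
RATE_PER_EXTRA_PAGE = 3.00    # dollars per page over the first 200

def records_review_fee(total_pages: int) -> float:
    """Return the per-page records-review charge for pages over 200."""
    extra_pages = max(0, total_pages - INCLUDED_PAGES)
    return extra_pages * RATE_PER_EXTRA_PAGE

# The 500-page example from the text: (500 - 200) x $3.00 = $900.00
print(records_review_fee(500))   # 900.0
print(records_review_fee(150))   # 0.0 -- under the 200-page threshold
```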
AI IN LAW AND IN WORKERS’ COMPENSATION CLAIMS
The use of AI in a workers’ compensation attorney’s law practice could include such
things as researching statutes, regulations, and case law. AI based programs could write a legal
brief, a legal article for a legal publication, or establish a best-worst case scenario for the
outcome of a claim. AI programs could summarize deposition transcripts of injured workers,
witnesses, or physicians. Can an AI application be used by a judge to write a Summary of Evidence or an Opinion on Decision? Or a Report and Recommendation on a Petition for Reconsideration or Removal?
The use of AI is already embedded in the legal search engines that attorneys and judges use every day. When counsel enters a word or phrase into the LexisNexis database, an AI assisted search engine can and will generate a list of statutes, regulations, and cases that may be pertinent to the search. Are those search engines accurate? Are trial briefs, Points and Authorities, and medical or deposition summaries generated by an AI assisted search engine reliable, accurate, credible, and persuasive? Is a judge’s AI-assisted decision or response to a Petition for Reconsideration or Removal reliable, accurate, credible, and persuasive? Can an AI based program write a medical-legal report, including providing WPI ratings of an injured worker, or write predictive apportionment findings?
Here is an example of an advertisement for a Generative AI subscription that was
advertised online:
With the most robust set of capabilties (sic) in the market, “NAME OF AI PROGRAM” helps you:
1. Review Documents: Ask complex questions about a batch of documents and receive a substantive analysis complete with citations.
2. Search a Database: Pinpoint relevant documents within a large database of your files.
3. Draft Correspondence: Draft tailored letters and emails with speed.
4. Summarize: Condense long, complex documents into succinct summaries.
5. Extract Contract Data: Obtain precise information about the content of contracts.
6. Timeline: Automatically assemble chronologies of events described in your documents.
7. Contract Policy Compliance: Provide a set of policies to identify non-compliant contract language and receive automated redlines to bring the contracts into compliance.
8. Prepare for a Deposition: Easily identify pertinent topics and questions for investigative projects of all kinds.
Does this generative AI program replace law clerks, staff attorneys, paralegals, secretaries, and first-year attorneys? Do you trust a computer application to guide your legal analysis of what may become a disputed issue? Where are the analytical skills about credibility or issue spotting? Can this program identify legal or factual issues that only a practicing attorney can determine? How do we know whether, if this generative AI program cannot find a legitimate legal citation, it will invent a fictitious one instead? What is really irritating about this is that speed is not necessarily quality, accuracy, or reliability.
A generative AI program cannot replace an attorney’s gut feelings, ability to smell a rat, or capacity to simply know, on the fly, what to ask during a deposition. Sometimes an attorney’s instincts kick in and establish a strategy based on those instincts alone – something generative AI cannot accomplish. Generative AI does not have human intuition, feelings, or empathy.
OVERLAPPING ETHICAL ISSUES
The use of artificial intelligence in the context of workers’ compensation litigation raises significant ethical issues, and ethical standards need to be developed to keep pace with the usage of AI. Since no formal ethical code of conduct exists for the use of AI in workers’ compensation litigation, a discussion of some basic premises of ethics in medicine may apply.
The analysis of ethical considerations in the medical-legal context begins with the Belmont Report in 1979, which was adopted by the federal government to apply to any federally funded medical research that involved human participants for new drugs, biologics, or devices. This broad-ranging mandate was codified under 45 CFR 46, called the “Common Rule,” which applies throughout the United States and has been adopted in our own Health and Safety Code [see Health and Safety Code Sections 24170-24179.5]. While ethical requirements in human subject protections in medical research are mandated by law, no such mandate exists in the use of AI in legal or medical-legal applications.
Since there is no law that governs how AI can be used or restricted from use in workers’ compensation litigation, the legal protection of human subjects in the medical research community can be analogized to form a framework of protection against abuse of the use of AI in workers’ compensation claims. We are, after all, engaging in a form of social, medical, and legal research just by using artificial intelligence in proposed ways during the course of a workers’ compensation claim. We do not have enough data or experience to draw any conclusions about the short-term or long-term effects on a claim, or on the individuals involved in a claim, when a party uses AI in the prosecution or defense of a claim. As of today, there are no legal or ethical guardrails in place to limit or regulate the use of AI in litigation. So how do we develop an ethical framework for the use of AI outside of the medical research community? We use medical research guardrails as a guide for the development of ethical usage of artificial intelligence in both medicine and the law.
The Belmont Report and 45 CFR 46 have a tripartite mandate:
(1) Respect for Persons – treat people individually and account for individual variances; perform research [or in our context, use artificial intelligence] in the best interest of a patient.
(2) Beneficence – medical research must provide a benefit to society and improve diagnostics and the treatment of disease [AI should be available to everyone for the benefit of individuals and groups of individuals].
(3) Justice – apply the concept of equality in the selection of research participants [the benefits of artificial intelligence should be distributed equally among populations and individuals].
In addition to this proposed basis for guardrails for the use of AI in medicine and law, there is also the concept in medicine that medical processes should follow FAVES: Fair, Appropriate, Valid, Effective, and Safe. You are reminded that in the context of medical-legal evaluations in workers’ compensation cases in California, Title 8 California Code of Regulations Sections 41 and 41.5 govern the ethical considerations for all physicians who perform medical-legal evaluations. Someday there should be a provision in those sections indicating that if any part of the medical-legal process is performed with the assistance of an artificial intelligence resource or program, a written disclosure statement shall be part of the physician’s reporting requirements.
POTENTIAL SHORTFALLS OF THE USE OF AI IN WORKERS’
COMPENSATION LITIGATION
There are a number of concerns about the use of artificial intelligence in the context of any form of litigation, especially in workers’ compensation cases. For the use of AI in both law and medicine, the FAVES factors should apply, because AI can be misdirected toward what is financially favorable to the doctor or claims administrator and not of ultimate benefit to legitimately injured workers. The use of AI by physicians and attorneys should be transparent, explainable, and subject to inspection. Remember, no one can cross-examine a computer or a computer program or application. How do you cross-examine a medical-legal physician who uses AI to (1) establish a diagnosis, (2) determine causation of injury, (3) determine WPI ratings, or (4) determine apportionment? An AI program cannot examine the injured worker, can it? Will it someday?
For those of you who are not familiar with the mechanics of artificial intelligence, there are some aspects of it that are very concerning. There are at least six machine languages that have been developed that can allow artificial intelligence programs to write their own code. Generative AI can have a “hallucination” when it generates a false medical or legal citation. AI programs can deteriorate or drift from how they performed when first introduced. In addition, AI could invent its own data set that is not based on reality. This phenomenon is called “performance drift” and must be monitored by human-based evaluation and oversight.
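As a purely illustrative sketch of what “human-based evaluation and oversight” of performance drift could look like, consider periodic human audits of a sample of an AI tool’s outputs, with each period’s error rate compared against a baseline. The baseline, alert margin, and audit figures below are hypothetical assumptions, not standards adopted by any authority.

```python
# Hypothetical sketch of human-in-the-loop drift monitoring: a reviewer
# periodically audits a sample of an AI tool's outputs (e.g., record
# summaries or citations) and records an error rate for each review
# period. The baseline and threshold are illustrative, not standards.

from statistics import mean

BASELINE_ERROR_RATE = 0.02   # error rate observed at deployment (assumed)
DRIFT_ALERT_MARGIN = 0.03    # flag if errors exceed baseline by this much

def audit_error_rate(audited_outputs: list[bool]) -> float:
    """Each entry is True if a human reviewer found the output erroneous."""
    return mean(1.0 if erroneous else 0.0 for erroneous in audited_outputs)

def drift_detected(audited_outputs: list[bool]) -> bool:
    """Return True if this period's error rate suggests performance drift."""
    rate = audit_error_rate(audited_outputs)
    return rate > BASELINE_ERROR_RATE + DRIFT_ALERT_MARGIN

# Example: a human audit finds 3 erroneous outputs out of 20 reviewed.
monthly_audit = [True] * 3 + [False] * 17
print(audit_error_rate(monthly_audit))  # 0.15
print(drift_detected(monthly_audit))    # True -- escalate for human review
```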
At the time of publication of this article, there is an organization called the “Coalition for Health AI” (chai.org) which has developed what is called an “Assurance Standard Guide” that divides oversight into three categories:
(1) AI Developer’s Responsibility – evaluate the AI model thoroughly before deployment to ensure it meets safety and performance standards.
(2) End-User’s Responsibility – conduct local evaluations to ensure the AI tool fits the specific needs and conditions of the health system.
(3) End-User’s Monitoring Responsibility – monitor AI tool performance over time, ensuring it remains effective and adapting to any changes in conditions.
The Coalition for Health AI is a public-private oversight organization involving academia, tech companies, and the federal government, formed to develop a national quality assurance laboratory to evaluate the safety and effectiveness of AI in medicine (covering the concept of beneficence). The idea is to prevent AI from making financial decisions in favor of payers rather than decisions benefitting a patient (sounds like UR, doesn’t it?).
Remember, there is no legal mandate (legislative or regulatory) to require these guardrails in the development or use of AI in medicine or in law. The promoters and supporters of the Coalition include major, credible medical and technology groups including but not limited to UCLA Health, Mayo Clinic, Google, Johns Hopkins Medicine, Boston Children’s Hospital, Kaiser Permanente, UC Irvine, UC Davis, UC San Diego, and others. The Coalition plans on monitoring the use of AI models in medicine, developing best practice guidance for developing and deploying health AI technologies on a use-case-by-use-case basis, and publishing an AI “report card” on a publicly accessible registry.
Is there a similar “Coalition for Law AI” that will do the same things as Coalition for
Health AI? Not yet – the only “oversight” of AI-based programs currently being marketed to
medical-legal physicians and attorneys is the market itself. Software developers are beginning to
saturate the market to sell AI based programs to medical-legal physicians, claims administrators,
and attorneys to help streamline the processing of information that is needed in the prosecution
or defense of workers’ compensation claims.
These include programs that summarize deposition testimony, provide predictive case outcomes based on the mechanism of injury and parts of body injured, set loss reserves, summarize 500 pages of medical records in 7 minutes, analyze a mechanism of injury, develop and send a client the “attorney’s” recommendations for further case handling, manage a law practice, and answer emails or phone calls from clients.
This raises a serious point: How much inter-rater reliability is there between a summary of medical records generated by an artificial intelligence program and a summary actually done by the medical-legal physician? We would like to see a side-by-side comparison of an AI generated medical records summary with one that is actually done by a human QME or AME. Would a 5% variation be acceptable? There are no studies yet on this issue. Further, to whom does the claims administrator pay the $3.00 per page for records reviewed beyond 200 pages? Doesn’t that alone raise some significant ethical issues for QMEs and AMEs who use artificial intelligence programs to review and summarize medical records?
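To illustrate what such a side-by-side comparison might actually measure, here is a hedged sketch: treat each summary as a set of discrete findings (visit dates, diagnoses, medications, and the like) and compute the fraction of findings on which the two summaries disagree, then test that figure against the 5% variation posed above. The findings and the tolerance are invented for illustration; no such metric is prescribed by any regulation or study.

```python
# Hypothetical sketch of a side-by-side AI-vs-human summary comparison.
# Each summary is reduced to a set of discrete findings; "variation" is
# the share of all findings on which the two summaries disagree.

def variation(human_findings: set[str], ai_findings: set[str]) -> float:
    """Fraction of findings in dispute: symmetric difference over union."""
    all_findings = human_findings | ai_findings
    disagreements = human_findings ^ ai_findings
    return len(disagreements) / len(all_findings) if all_findings else 0.0

# Invented example findings -- placeholders, not from any real record.
human = {"2023-01-10 lumbar MRI", "L4-L5 disc protrusion",
         "physical therapy x12", "naproxen 500 mg"}
ai = {"2023-01-10 lumbar MRI", "L4-L5 disc protrusion",
      "physical therapy x12", "gabapentin 300 mg"}  # one disputed item

v = variation(human, ai)
print(f"{v:.0%} variation")  # 40% -- two disputed findings out of five total
print(v <= 0.05)             # False -- fails the 5% tolerance posed above
```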
Artificial intelligence is currently embedded in MS Office (WORD especially) and now in a LexisNexis search. All you have to do is type a word or phrase into the search engine and AI will assist the user in retrieving results from the database. We already know that some AI based programs have gone awry: a federal judge in New York received an AI assisted legal brief from an attorney who did not check the legal citations that were generated by the AI program. The judge did check them and discovered that the citations were a figment of the AI program’s imagination – the cited cases never existed. It did not take a computer program to generate sanctions against the attorney who filed the AI generated brief.
Counsel is strongly advised to check their work.
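As a modest illustration of what “checking their work” can mean in practice, the sketch below flags any citation in an AI-drafted brief that cannot be confirmed against a trusted source. The case names and the verification list are invented placeholders; in real practice, the check means reading the actual opinions in an official reporter or a verified research database, not merely matching strings.

```python
# Hypothetical sketch of a pre-filing citation check: every citation in
# an AI-drafted brief is compared against a trusted, verified source.
# The citations and verified list below are invented placeholders.

def unverified_citations(draft_citations: list[str],
                         verified_sources: set[str]) -> list[str]:
    """Return citations that could not be confirmed in the trusted source."""
    return [c for c in draft_citations if c not in verified_sources]

verified_sources = {"Smith v. Jones (2020) 99 Cal.App.5th 100"}  # placeholder
ai_draft = [
    "Smith v. Jones (2020) 99 Cal.App.5th 100",
    "Garcia v. Acme Corp. (2021) 123 Cal.App.6th 456",  # possible hallucination
]

for citation in unverified_citations(ai_draft, verified_sources):
    print("VERIFY BY HAND:", citation)  # read the actual case before citing it
```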
ETHICAL CODE OF CONDUCT?
AI is creeping into our everyday lives. Artificial intelligence is becoming part of our normal day-to-day lives, and it is being used even when you do not know it. Artificial intelligence programmers can take the likeness of any person, say Taylor Swift for example, and generate what is now known as a “deepfake,” which renders her likeness in an AI generated image and uses her voice to say anything the programmers want in a way that sounds like her real voice. The introduction of our AI seminar at the CSIMS conference on Coronado Island in August 2024 used the likeness of Scarlett Johansson and her voice in a video that was developed using AI. The image and sound seemed very real, but the actual person and her voice were not.
So how would the Belmont Report of 1979, along with the protections for human research participants, apply in the context of the use of predictive analytics and generative AI in medicine and law? Respect for persons: (1) there needs to be transparency on how patient data is being used, (2) clarity about the role AI plays in decision making, and (3) regulators must be allowed access to the algorithms. Beneficence: a patient should be able to decline the use of AI as part of the informed consent process. An injured worker should be told that the utilization review process may be determined by AI, but the injured worker will be provided reasonable treatment to cure or relieve the effects of the injury based on the medical treatment utilization schedule in ACOEM, upon review by a licensed physician through the Utilization Review and Independent Medical Review processes of Labor Code Sections 4610, 4610.5, and 4610.6. Justice: any decision making process or review of a record by artificial intelligence is subject to scrutiny by the Workers’ Compensation Appeals Board.
Here is an ethical issue: can a treating physician create an avatar who meets with the patient electronically? Is a physician obligated to disclose to a patient that some of the interactions between the patient and the doctor’s office are through an avatar or otherwise from an artificial intelligence based application? Does a physician have to disclose that the probable outcome of surgery is based on a predictive analytics algorithm from an AI program?
An AI based algorithm has to be “fair” – one that provides the same treatment recommendation for all patients with the same clinical features. Can AI undermine physicians’ or attorneys’ professional role as a fiduciary for a patient’s or client’s best interests? Ethical considerations exist in both the medical and legal fields of practice. Attorneys are bound by the Code of Professional Conduct [see Business and Professions Code Sections 6000 et seq.] and physicians are bound by their own professional standards and ethics. Specifically, Title 8 Cal. Code of Regulations Sections 41 and 41.5 govern the ethical considerations for medical-legal evaluators.
DISCLOSURE-DISCLOSURE-DISCLOSURE!
There is no formal code of conduct in medicine or in law as to limitations on practitioners’ use of applications programmed with artificial intelligence. There need to be guardrails around the use of both predictive analytics and generative AI in medicine and law. We need to look to the National Institutes of Health, the Centers for Disease Control and Prevention, and the federal Office for Human Research Protections for guidance. Meanwhile, neither the California Business and Professions Code nor the Rules of Professional Conduct covers ethical considerations for attorneys’ use of predictive analytics or generative AI in a law practice. There has to be a movement to build public trust in the use of artificial intelligence in medicine and in the courtroom. A lawyer, like a doctor, has a fiduciary duty to their client. There should be a requirement that if a physician, an attorney, or a judge writes anything using generative AI, the physician, attorney, or judge has to disclose its use and attest to its authenticity and accuracy.
After all, the attorney or physician owns what is written and has to defend its contents. The missing element from written articles or reports that are generated by artificial intelligence is the style or uniqueness of the writer’s prose. There is almost an innate ability to tell when something was written by a machine and not by a person. All of us have a certain style of writing, and there is always a human touch to how it reads. This article, for example, has some clunky word usage that is a product of this author’s unique writing style. The tone and emotion of writing are missing from AI generated prose. You can tell it was not written by a human. It just does not pass the smell test. But the AI-based applications will improve over time.
The concept of disclosure is not new or foreign in the practice of medicine or in the practice of law. Informed consent is the hallmark of any fiduciary relationship between a patient and their physician or between a client and their attorney. If any part of a workers’ compensation claim has been run through an artificial intelligence application by a physician or an injured worker’s attorney, the injured worker should have knowledge of that fact. The metrics that are offered to claims administrators are limited as well – no one can predict the outcome of a claim, and not every lumbar spinal fusion surgery has the same outcome. Predictive AI probably has very little use in the legal profession other than to give a claims examiner, risk manager, or defense attorney a “best case” and “worst case” scenario that a good defense attorney could already develop just by reading the case file.
I SENSE DANGER, WILL ROBINSON!
Do you remember the Robot in the television show “Lost In Space”? So how far can a medical-legal physician rely on a currently marketed application that is based on generative artificial intelligence to write a medical-legal report? Can a physician utilize a program that uses generative artificial intelligence to write a summary of 500 pages of medical and legal records? What about our anti-ghostwriting statute?
Since this article is written about workers’ compensation claims and the use of predictive
analytics and generative AI within the workers’ compensation community, a direct quotation of
California Labor Code Section 4628 is appropriate. Labor Code Section 4628 is the “ghostwriting” prohibition that says the medical-legal physician writes and signs the report and must
disclose who else contributed to the medical-legal evaluation process and report writing process.
Here is Labor Code Section 4628 in its entirety:
4628(a) Except as provided in subdivision (c), no person, other than the physician who signs the medical-legal report, except a nurse performing those functions routinely performed by a nurse, such as taking blood pressure, shall examine the injured employee or participate in the nonclerical preparation of the report, including all of the following:
(1) Taking a complete history.
(2) Reviewing and summarizing prior medical records.
(3) Composing and drafting the conclusions of the report.
(b) The report shall disclose the date when and location where the evaluation was performed; that the physician or physicians signing the report actually performed the evaluation; whether the evaluation performed and the time spent performing the evaluation was in compliance with the guidelines established by the administrative director pursuant to paragraph (5) of subdivision (j) of Section 139.2 or Section 5307.6 and shall disclose the name and qualifications of each person who performed any services in connection with the report, including diagnostic studies, other than its clerical preparation. If the report discloses that the evaluation performed or the time spent performing the evaluation was not in compliance with the guidelines established by the administrative director, the report shall explain, in detail, any variance and the reason or reasons therefor.
(c) If the initial outline of a patient’s history or excerpting of prior medical records is not done by the physician, the physician shall review the excerpts and the entire outline and shall make additional inquiries and examinations as are necessary and appropriate to identify and determine the relevant medical issues.
(d) No amount may be charged in excess of the direct charges for the physician’s professional services and the reasonable costs of laboratory examinations, diagnostic studies, and other medical tests, and reasonable costs of clerical expense necessary to producing the report. Direct charges for the physician’s professional services shall include reasonable overhead expense.
(e) Failure to comply with the requirements of this section shall make the report inadmissible as evidence and shall eliminate any liability for payment of any medical-legal expense incurred in connection with the report.
(f) Knowing failure to comply with the requirements of this section shall subject the physician to a civil penalty of up to one thousand dollars ($1,000) for each violation to be assessed by a workers’ compensation judge or the appeals board. All civil penalties collected under this section shall be deposited in the Workers’ Compensation Administration Revolving Fund.
(g) A physician who is assessed a civil penalty under this section may be terminated, suspended, or placed on probation as a qualified medical evaluator pursuant to subdivisions (k) and (l) of Section 139.2.
(h) Knowing failure to comply with the requirements of this section shall subject the physician to contempt pursuant to the judicial powers vested in the appeals board.
(i) Any person billing for medical-legal evaluations, diagnostic procedures, or diagnostic services performed by persons other than those employed by the reporting physician or physicians, or a medical corporation owned by the reporting physician or physicians shall specify the amount paid or to be paid to those persons for the evaluations, procedures, or services. This subdivision shall not apply to any procedure or service defined or valued pursuant to Section 5307.1.
(j) The report shall contain a declaration by the physician signing the report, under penalty of perjury, stating:
“I declare under penalty of perjury that the information contained in this report and its attachments, if any, is true and correct to the best of my knowledge and belief, except as to information that I have indicated I received from others. As to that information, I declare under penalty of perjury that the information accurately describes the information provided to me and, except as noted herein, that I believe it to be true.”
The foregoing declaration shall be dated and signed by the reporting physician and shall indicate the county wherein it was signed.
(k) The physician shall provide a curriculum vitae upon request by a party and include a statement concerning the percent of the physician’s total practice time that is annually devoted to medical treatment.
CONCLUSION – FOR LAWYERS AND JUDGES
There must be a movement to build public trust in the use of AI in medicine and in the
courtroom. A lawyer, like a doctor, has a fiduciary duty to their client. There should be a
requirement that if an attorney or a judge writes anything using AI, the attorney or judge has to
disclose its use. For goodness sakes, check your work! Double check the citations that are
generated by the software and read the actual cases to verify the authority you are citing. No one
can cross-examine a computer or its programming.
CONCLUSION – FOR MEDICAL-LEGAL PHYSICIANS
Is Labor Code Section 4628 a full stop for medical-legal physicians to use generative AI
in their report writing process? Can a medical-legal physician use AI to summarize medical
records? Could a judge disallow payment and deem a medical-legal report inadmissible because
the evaluating physician was assisted by AI in the generation of the report? Regulations and case
law may be necessary to answer these questions. In the meantime, we can look forward to some
ethical considerations within the medical, medical-legal, and legal communities in the use of
predictive analytics and generative AI since artificial intelligence in general is rapidly becoming
part of our daily lives as human beings.
CONCLUSION – THE ULTIMATE GUARDRAILS FOR INJURED
WORKERS
Is there potential civil liability for the owners and developers of proprietary artificial intelligence software that generates a deepfake image of an injured worker or their attorney, or of a proprietary generative AI program that produces an inaccurate medical record summary or claim analysis that a QME, AME, employer, or claims examiner relies on? The ultimate guardrail against harm by a software company that sells artificial intelligence programs to participants in a workers’ compensation claim is a civil lawsuit against the AI developers in Superior Court for damages, in addition to costs, sanctions, and attorney’s fees in the workers’ compensation case at the WCAB against an applicant or defendant who misuses AI.
The ultimate responsibility of anyone who utilizes any form of artificial intelligence in the course of a workers’ compensation case is full disclosure by the person or persons who utilize AI during any step of the claims process. There need to be regulations, industry standards, or other required ethical considerations mandating that any use of AI by any person involved in a workers’ compensation case be fully disclosed to any affected participant in that case. Generative AI and predictive analytics do not have a human touch. No one knows what was written by a human and what was written by a machine.
In addition, there should be a required written disclosure that AI was utilized and how it was utilized, with some form of assurance that a human being reviewed the information generated by an AI program before any substantive decision was made concerning any aspect of a claim. There is absolutely no room for deception in the course of a workers’ compensation claim, since every judge has a duty to decide the rights and obligations of the parties based on the evidence admitted at trial. That evidence has to be valid, reliable, accurate, credible, and persuasive. A computer software system that uses artificial intelligence cannot make those determinations for us. There must be a human touch from claim form to claim resolution.
Postscript: The author of this article wants to acknowledge the essay “The Ethics of Relational AI – Expanding and Implementing the Belmont Principles” by Ida Sim, MD, PhD, and Christine Cassel, MD, New England Journal of Medicine, 391:3, July 18, 2024, pp. 193-196.
© 2024 Robert G. Rassp, all rights reserved.