A Perspective on the Ethics of Clinical Medical Research on Human Subjects

The following is the essential initial line of reasoning involving the general ethics of clinical medical research involving experimental interventions on human subjects. While some of this paper may have application to more general research involving human beings, its focus is what is normally called “therapeutic research”, and by that we mean simply research involving interventions that test experimental therapies, particularly in regard to phase II and phase III types of studies that seek to find out whether an intervention is safe and effective. It is not about phase I research, nor about non-therapeutic research, such as participation in a retrospective study or a survey.
Proposition 1) Medical progress requires medical research.

Proposition 2) Medical progress benefits future patients with the condition for which progress occurs.

Proposition 3) Medical research sometimes requires experimentation with newly tried treatments (i.e., experimental treatments) on human subjects.

Proposition 4) Experimental treatments can be dangerous and result in serious harm or death.

Proposition 5) Benefitting patients is the justification for conducting research that is risky or potentially dangerous.

Proposition 6) Therefore (from 1, 3, and 4), medical research and medical progress can sometimes be risky and result in serious harm or death.
Proposition 7) → Because of the risks involved, medical experimentation with human subjects requires the most rigorous scientific validity in order to be ethical, because to put people at risk for no good reason (i.e., to put people at risk in order to obtain results that are not necessarily accurate or meaningful) would be morally wrong.

Proposition 8) There are other conditions besides scientific invalidity that also make putting innocent people at risk morally unjustifiable, such as doing so without their consent, or doing so in ways in which the likelihood or quality of the potential benefit is not commensurate with the risk or magnitude of the potential harm for the subject.
Proposition 9) → Therefore (from 6, 7, and 8), the most rigorous scientific validity is a necessary, but not sufficient, condition in order for experimentation with human subjects to be ethical.

Proposition 10) → The most rigorous scientific validity requires using randomized, double-blind, placebo-control trials in medical research.

Proposition 11) → Scientifically rigorous and valid randomized control trials require strict adherence to treatments that cannot be individualized.

Proposition 12) Some subjects in randomized control trials receive placebo treatment or experimental treatment that is not tailored to their individual needs.

Proposition 13) Hence (from 12), some subjects in randomized control trials do not receive the best treatment for their conditions.

Proposition 14) → It is unethical for a physician not to use the best treatment available for a patient, and it is unethical for a physician to recommend to a patient that the patient participate in a program of treatment, such as a research program, in which the patient will not receive, and/or will not be able to receive, the best treatment available, and which may also be dangerous, harmful, or fatal for the patient.

Proposition 15) → Hence (from 13 and 14), it is unethical for physicians to utilize patients as research subjects or to recommend they participate in medical research.

Moreover,

Proposition 16) It would be morally wrong to harm or put people at risk against their will or in a manner that would be against their will.[1], [2]

Proposition 17) Thus, utilizing people as research subjects in ways against their will, or in a manner that would be against their will, would be morally wrong.

Proposition 18) People who give valid consent to participation in a research project are not being utilized against their will or in a manner that would be against their will.

Proposition 19) Hence (from 17 and 18), valid consent to participate in a research program is a sufficient condition for not utilizing (or exploiting) people in medical experiments against their will or in a manner that would be against their will. And not utilizing people against their will or in a manner that would be against their will is a necessary but not sufficient condition for conducting research on human subjects in a morally right way. (Note: this does not logically imply that valid consent is sufficient for conducting research on those who give it. Nor does it logically imply that valid consent is necessary for treating subjects experimentally.[3] For something to be a sufficient condition to meet a necessary condition is just to say it is one way to meet a requirement that itself is not sufficient. E.g., to obtain a college undergraduate degree, it is often sufficient to take an art history course to meet a humanities requirement, and it is often necessary to meet a humanities requirement to obtain an undergraduate degree, but that does not make taking an art history course either necessary or sufficient to obtain a college undergraduate degree.)
Proposition 20) If patients, no matter what they are told, still mistakenly believe that they will definitely receive treatment instead of a placebo, and if they mistakenly believe there is minimal or no risk to being a research subject, or that the potential benefits are commensurate with the risk, then their informed consent is not valid consent or is not rational consent.

Proposition 21) According to studies, many research subjects mistakenly believe that they will definitely receive treatment instead of a placebo, and they mistakenly believe there is no or minimal risk to receiving the experimental treatment, or they believe the potential benefits are commensurate with the risk.

Proposition 22) Hence (from 20 and 21), many patients should not be research subjects (i.e., research on experimental treatments should not be conducted on these patients/subjects).

There is evidence against the premises preceded by arrows (→): propositions 7, 9, 10, 11, 14, and 15.
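For reference, the inferential skeleton of the argument just given can be set out schematically (this is our reconstruction, with $P_n$ abbreviating Proposition n and $\Rightarrow$ read as “therefore”):

\[
\begin{aligned}
&P_1 \wedge P_3 \wedge P_4 \Rightarrow P_6; \qquad P_6 \wedge P_7 \wedge P_8 \Rightarrow P_9; \qquad P_{12} \Rightarrow P_{13}; \qquad P_{13} \wedge P_{14} \Rightarrow P_{15};\\
&P_{16} \Rightarrow P_{17}; \qquad P_{17} \wedge P_{18} \Rightarrow P_{19}; \qquad P_{20} \wedge P_{21} \Rightarrow P_{22}.
\end{aligned}
\]

The note to Proposition 19 is the general logical point that from “$C$ is sufficient for $N$” and “$N$ is necessary for $G$” it does not follow that $C$ is either necessary or sufficient for $G$.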
We will argue that those statements, as they stand, are indeed false,
and that some of the arguments to overcome the objections to them will not
suffice. In some cases, however, those statements can be replaced by true
statements which do not cause as many problems.
We will also argue that some of the arguments meant to
refute the above arguments or false propositions are unsound or miss the point.
In some cases other arguments will serve that purpose better.

Problem of Placebo Controls

It should be fairly clear from the above that placebo
controls and the potential dangers of experimental treatments cause two
different kinds of problems, and are in some ways the lynchpin of two
significant ethical concerns in conducting research on human subjects – 1)
causing actual harm to (or at least not helping benefit) research subjects or
2) exploiting people by conducting research on them that subjects them to the
risk of harm (or lack of benefit in the case of placebo), even when harm does
not actually occur. Consider first the alleged problem of using
placebo-controls: “Randomized double-blind placebo-controlled
trials have been argued to provide the strongest test of efficacy and, as such,
are important tools for advancing the evidence base supporting [...]
treatment. However, such trials present
difficult ethical issues, because one group, by definition, receives no
treatment for the condition being studied.”[i]
This may be put in the form of a dilemma: if one uses placebos in research, one will not be “treating” patients (i.e., benefitting them through actual treatment by administering to them) as one should be; and if one does not use placebos in research, one will not be doing proper research. Thus, one must choose between taking care of patients properly and doing research properly.[4] This
dilemma is often characterized as the dilemma between a utilitarian
consequentialist approach to ethics (the obligation to do what will bring about
the greatest balance of good for the greatest number of people) and a
“deontological” approach to ethics that says there are some things more
important than just bringing about the greatest balance of good for everyone –
that, for example, people should be treated with respect and not “used” at their expense to gain benefits for others, and particularly that they should not be so used by those who have taken an oath to place their health and well-being first.[5]

Consider
also a parallel dilemma involving the risks and dangers for the subject of
experimental research: If one harms human subjects in research, one will be
treating them maleficently and abrogating the obligation to “do no harm”, thus not treating people as one should; and if one does not test
treatments – even if the treatments might or do cause harm -- one will not be
doing proper research. Thus, one must
choose between taking care of patients properly and doing research properly. Taken
together, these two dilemmas are expressed in proposition 14 (and consequently
in 15 also) above. While
there are many fine articles that have contributed to the discussion of individual
elements of the arguments we are considering – articles which make excellent,
insightful, reasonable, and sometimes passionately eloquent points -- this
paper is intended to offer a more systematic and, in a sense fundamental,
philosophical analysis of the underlying issues involved, showing their
relationship to each other and to the overall topic. Furthermore,
the authors will contend that there can be some valid controlled studies that
satisfy the underlying logic of
randomized double-blind, placebo-control or “no treatment” control trials [RCT]
even without withholding treatment
from anyone. With
this as background, we would now like to examine the initial line of reasoning
presented at the beginning of this paper and those answers and ensuing
arguments generally given in response to it. The conventionally accepted response to the dilemma is
to deny the truth of proposition 14 (and subsequently proposition 15) by invoking
the concept of equipoise, which comes
in two forms, with the distinction and significance between them to be
explained shortly: theoretical equipoise
and clinical equipoise. The basic
idea of equipoise is that it is not a breach of the duties of beneficence and
non-maleficence if, without professional negligence, one truly does not know
which act will bring about the greatest benefit or will cause more harm than
good for a subject. “Equipoise” is the word that describes a non-negligent lack
of sufficient knowledge/evidence about which treatment option, if any, is best
for the patient or subject, and therefore absolves the physician or researcher
for conducting research on subjects or patients, and absolves a physician for recommending
to patients that they join a research trial or for condoning a patient’s
joining such a trial, even if the patient receives no treatment or a placebo
(and thus is not given a real treatment) and is not benefitted by medical care
as such, and even if the patient is harmed by the treatment. What the principle of equipoise in research ethics does is
to allow research to be conducted on proposed (or existing) therapies when the
result is uncertain and may actually benefit the subject who has the condition
for which the therapy is intended or used.
This is because the researcher (whether physician or not) then does not
already know the answer to the question as to which treatment (or whether
non-treatment) is best for the patient or for similar patients and therefore is
not intentionally (or negligently) harming or intentionally (or negligently)
refraining from helping the subject by placing the subject in one arm of the
study rather than another. And the
physician is not intentionally harming or refraining from helping the subject
by encouraging or allowing the subject to participate in the study.

In propositions 14 and 15, the term “unethical” and its implied
antonym, “ethical” are ambiguous -- in some cases referring to what is right or
wrong, and in other cases referring to what is morally responsible,
conscientious, and reasonable, or what is morally irresponsible and culpable. What
turns out to be wrong, for example, given full information (which normally
appears only in hindsight or after the fact), is not always morally culpable or
unreasonable. Putting or allowing a patient to be in a research project’s
control arm, where the experimental arm yields benefits, is wrong (in the sense
of outcome) but is not culpable or blameworthy since one could not know the
outcome prior to the placement. Similarly,
but opposite, when a treatment turns out to be harmful, the right thing to do
would have been to put one’s patients in the control group instead of the
treatment group, even though the above dilemmas might have made that seem to be
a violation of the Hippocratic oath, when, prior to the trial, the experimental
arm seemed to be the one most likely beneficial. While ethical principles seek to accomplish what is
actually right, they cannot hold one culpable for non-negligent ignorance that
leads to error. While utilitarianism, for example, is the principle of doing the
greatest good (i.e., bringing about the greatest benefit) for the greatest
number, one is not normally morally culpable for falling short of that if one
in fact does the act which all available evidence would lead any rational and
knowledgeable person to believe would bring about the greatest good for the
greatest number, but which ends up not doing so. Unfortunately, since people are not
omniscient, and must instead use logic based on reasonably best available
evidence, the intended goal is not always achieved. The most reasonable act based on all
available evidence is not always the act that turns out to be right. This is not just a problem for utilitarianism
or other “consequentialist theories” (i.e., theories of right that involve
reference to the best actual results or consequences), but it applies to all
ethical theories, since what most rationally seems, for example, to be treating
a person with respect may not be what actually respects that person. E.g., if you visit someone in the hospital
who is seriously or terminally ill, that may not be what they want, even if it
might be what most people might want. Or arguing with most people makes them feel you are not respecting them, whereas many other people feel that arguing with them shows you respect their opinions enough to challenge them, and that you respect the person enough to want to show him/her what is best, even though that takes effort on your part and even though you risk being disliked for being thought arrogant, controlling, or argumentative.

It might be thought, given we can at best only be
accountable for being non-negligently rational, that principles such as
utilitarianism or the Hippocratic Oath should be worded as: “one should do what
one rationally and non-negligently believes is best for all or for the
patient.” Or that “one should do what
one rationally and non-negligently believes shows the most respect for other
people.” And while it is true in one
sense that those are things one “should” do, it is also the case that doing
that will not necessarily be the right thing to do – will not turn out to be
correct. Ethical principles and obligations need to refer to what is
objectively right, not just to what is rational, or otherwise everyone who is
rational and who has best intentions would be automatically right even when
what they do turns out to actually have terrible consequences or override
someone else’s rights by mistake. It is
important linguistically and conceptually that we keep the distinction between
what is right and what is rational, so that when the two do not coincide, we
have a way of pointing that out. Ethical
principles need to refer to what is actually right, but moral character
involves non-negligently doing what all available evidence would rationally indicate is right. So there is a distinction
between being right and being reasonable.
People should always (try to) do what is right, but they can only be
held accountable for being (non-negligently) reasonable in the attempt. In this particular context of the above
dilemmas, being “non-negligent” includes being current in the professional
knowledge one should have. Applying that idea to the concept of equipoise, as it is
used to try to justify research on human beings, when such research may include
no (beneficent) treatment (i.e., placebo) or actual harm, the idea is that if a
physician or researcher does his/her best in trying to help an ill or injured
patient while also seeking scientific knowledge, the physician or researcher,
as long as s/he is reasonable and non-negligently knowledgeable is not morally
culpable even if the subject turns out in actuality not to be helped, or even
to be harmed. Hence, as long as a competent
physician is not conducting or recommending a research study that will
knowingly likely harm the patient as subject, s/he is not violating his/her
oath or any reasonable ethical principle about treating people decently.[6]

However, since Benjamin Freedman’s contribution to this subject (referenced below), “equipoise” relevant to research has been distinguished into two forms: 1) “theoretical” or individual equipoise, whereby the individual
researcher does not already know what is in the subject’s best interest, and 2)
“clinical” or professional equipoise, whereby the profession “as a whole” (however
that might reasonably be understood or determined to mean) does not know, or
cannot agree, on what is the best course of action for the subject with the
condition. Since Freedman introduced the
distinction, clinical equipoise is generally considered the important form
necessary to justify research on human subjects, in order to prevent justifying
research because of the mere hunches or beliefs of an individual researcher
when that seems to ignore the professionally received view based on prior
research evidence. It is the
profession’s collective uncertainty about what is considered best treatment,
not the individual’s uncertainty, that is considered to be important.[7]

We contend, however, that each of these kinds of equipoise is
important for a different reason, but each is necessary in order to justify
doing research, because it would be wrong to subject people to harm or lack of
help in order merely to formally demonstrate what is already clearly known. Clinical equipoise (i.e., equipoise within
the profession as a whole) is necessary in order to morally justify a particular
research study at all, regardless of who conducts it. And theoretical (i.e., individual) equipoise
is necessary in order to allow a particular researcher to do a trial. Without the requirement for clinical
equipoise, anyone could conduct research just by not knowing what s/he should know professionally. Clinical equipoise is a
safeguard against irrational research by individuals who do not have the prior
knowledge they should. On the other
hand, personal or theoretical equipoise is necessary for the moral character or
moral nature of the individual researcher, who should not be conducting
research s/he believes will be unnecessarily harmful (or immorally neglectful and unhelpful) to the subjects in the study even if colleagues or
the profession as a whole, or on balance, think the research is justified. But one misunderstanding of theoretical equipoise is that
it requires absolute or total uncertainty about which arm of a research study
will best serve the subjects who have the condition for which a therapy is
being studied.[8] We contend it does not. We maintain that all that is required is some
form of reasonable doubt or uncertainty about whether a treatment will be
harmful or not and whether it will be helpful or not. A physician or researcher
may have a belief about the safety and efficacy of treatment, whether standard
or experimental, and yet still have sufficient doubt about the accuracy of that
belief to warrant not considering it to be “knowledge” or “proven”. One can believe something and yet be aware
the belief does not rise to the level of certainty or knowledge, or even much
probability. That should be sufficient
equipoise to justify a research study.
When sufficient knowledge tips the balance to warrant believing with
certainty that a particular treatment option (among those being considered) is
or is not best for patients, then research using those options is not
legitimate. This is sometimes referred
to as reaching a point where equipoise is “disturbed”; and research is only
said to be legitimate when equipoise exists or is “undisturbed.” These mean that
the research is justified because the safety and efficacy of the treatment
under study is not yet sufficiently known (i.e., “equipoise is undisturbed”, so
there is still uncertainty about it), or that the research is unjustified
because the treatment proposed or under study has been already shown to be safe
and effective, or because it has already been discredited, having been sufficiently demonstrated to be ineffective, dangerous, or not worth the troubling side-effects even if they are not dangerous (i.e., “equipoise is
disturbed” because there is now certainty about the treatment’s safety and
effectiveness, or its lack thereof). The same elements, whatever they might be, for determining
whether sufficient equipoise exists to begin or conduct a research study, also
determine the proper “endpoint” of a trial, the point at which there is
sufficient evidence to end or discontinue the trial because there is no longer
sufficient doubt whether the treatment is safe or effective. More will be said
about this later. The two different kinds of equipoise,
however, are morally necessary, but for two different purposes. Clinical equipoise (uncertainty within the
profession as to which arm of a trial will be best for subjects) is necessary
in order to justify the research project being conducted at all, no matter who
does it; and theoretical equipoise (uncertainty about the same thing by an
individual researcher, or by a physician recommending a patient participate in
a research trial) is necessary to justify the particular researcher’s
conducting the study and to justify any physician’s allowing or recommending a
patient’s participation in it. While lack of clinical equipoise precludes anyone’s doing the project, an individual physician may still be obligated not to conduct a study that clinical equipoise permits, or to advise or condone his patient’s being in it, because s/he disagrees with the general professional opinion that the evidence is still insufficient to settle the question and that the trial is therefore warranted. While colleagues may not believe there is
sufficient evidence that a trial will harm or not benefit a patient as much as
a different treatment, the physician who does would be morally inconsistent
(perhaps approaching dereliction of duty) to conduct such a study and/or
condone his/her patients’ participation in the study if someone else conducts
it.

The Significance of the Duty to Minimize Unnecessary Risk

While clinical and theoretical equipoise
are both necessary to justify research involving human subjects, they are not
sufficient. As pointed out in proposition 17, it would be morally wrong to utilize people in research studies against their will or in a manner that would be against their will. It is also normally morally necessary that a proposed treatment have sufficient evidential and/or theoretical basis for making it reasonable to believe it will be successful, before it is attempted on human beings. Normally one doesn’t morally just “try anything” on people to see what it does, though there may be some last-ditch, desperate situations, when otherwise all is lost anyway, in which it is not unreasonable to at least try something, no matter how unlikely it might be to work. And it is normally morally necessary for the likely risk/benefit ratio – in terms of probability and magnitude, and in terms of significance for the patient[9] – to be reasonable.
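One rough way to put this commensurability requirement schematically (our gloss only, not a formula prescribed anywhere in the research-ethics literature): letting $p$ and $B$ be the probability and the significance-weighted magnitude of the potential benefit to the subject, and $q$ and $H$ the probability and magnitude of the potential harm, participation is reasonable only if, roughly,

\[
p \cdot B \;\gtrsim\; q \cdot H.
\]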
But we want to discuss here a different requirement necessary to morally justify research on human beings – the moral obligation
not to subject them to unnecessary risk, where “risk” refers both to the risk
of harm from the treatment and also the risk of harm (or unnecessary worsening
of their condition) from withholding
treatment (as in placebo-controls or “no-treatment” controls). In particular, we want to discuss the
necessity of minimizing risk from not-treating a research subject, by
withholding treatment in the form of giving no treatment or in the form of
administering a placebo. (There is a difference because “placebo effects” are
effects, though their cause is not always, or perhaps ever, understood or known.[10]) Much has been written in the controversy
about use of placebo controls, particularly in regard to the controversy of
their (moral) appropriateness as a control instead of standard treatment, and
we do not wish to try to summarize or revisit all of it. Instead we wish to put it in a broader
context. First, [for later reference, we designate
this as “Paragraph A”:] it is generally, though not universally, recognized
that placebo controls in place of treatments which are known to prevent death
or serious, particularly irreversible morbidity are not morally legitimate, at
least not unless a case can be made for altruistic martyrdom in the name
of science or unless there is an
instance where the subjects will die very soon anyway from some other cause and
dying from lack of effective treatment will not be any worse for the subject
than the way s/he will otherwise die. We
will return to this point because it is a specific corollary of a more general
principle.

Second, we wish to deny proposition 10 at the beginning of this essay and point out that randomized control trials (RCTs), while extremely significant, are not the only way, or necessarily, in certain circumstances, even the best way, to gain scientific or medical knowledge. RCTs have certain logical limitations besides any practical or moral ones.

1) They cannot detect result differences when causal
factors are unknowingly proportionally distributed among cohorts studied, which
can and does happen when causally relevant factors are unknown. Unfortunately, it is logically possible, and
sometimes actually the case, that large sample sizes, intended to cancel out
incidental or coincidental factors, can also cancel out or mask actual causal
factors, particularly when relevant, significant, actual causal factors are accidentally, and without being recognized, divided equally or proportionally between the control and test groups, or are distributed relatively equally among different cohorts.[11] This can happen particularly when there are multiple causally influencing factors. E.g., in hormone replacement treatment (HRT) for women, early evidence that HRT was substantially beneficial turned out not to hold when it was given to women well beyond the age of menopause, rather than at the age of menopause. This harm was masked in that group by the
gains made overall by women of different ages when combined in the same
cohort. For randomized control tests to
be effective, it is crucial that the causal factors be divided properly in the
experimental group into cohorts that separate them from each other and that
eliminate them from the control group cohorts.
Unfortunately, when the causally influencing factors are not already
known, and worse, when they are not even suspected, that cannot or will not
likely be done. In the HRT study that
showed age, or elapsed time past menopause, was a factor, it was fairly
reasonable it would be detected in a controlled study, because grouping cohorts
by age is one typically normal way to proceed.
However, if the causal factors that made HRT dangerous for some women
and beneficial to others were evenly distributed throughout all the cohorts,
the overall gains might have masked the dangers for those with the unknown
causal risk factors while simultaneously seeming to lower the beneficial
effects for those without it, since they would not be in (a) cohort(s) by themselves. (The simulation sketched below illustrates this masking effect.)
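To illustrate the masking effect with entirely hypothetical numbers (this is our sketch, not data from any actual HRT study): suppose an unknown factor, carried by half the population, reverses the treatment’s effect. The aggregate comparison then shows nothing, while the subgroup comparisons, possible only if the factor were known and used to form cohorts, show the true effects:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial: one treatment arm and one control arm of
# 10,000 subjects each.  Unknown to the researchers, half of the
# population carries a factor that reverses the treatment's effect.
n = 10_000
carrier = rng.random(n) < 0.5   # the unknown causal factor, ~50% prevalence
# (For simplicity the same carrier mask is reused for both arms;
# randomization would make the prevalence roughly equal anyway.)

def outcomes(treated: bool) -> np.ndarray:
    """Simulated health outcome (higher is better)."""
    base = rng.normal(0.0, 1.0, n)
    if treated:
        # +0.5 benefit for non-carriers, -0.5 harm for carriers.
        return base + np.where(carrier, -0.5, 0.5)
    return base

treat, control = outcomes(True), outcomes(False)

# Aggregate comparison: the two subgroup effects cancel almost exactly.
print(f"aggregate effect:   {treat.mean() - control.mean():+.3f}")

# Subgroup comparisons, possible only if the factor were known:
print(f"non-carrier effect: {treat[~carrier].mean() - control[~carrier].mean():+.3f}")
print(f"carrier effect:     {treat[carrier].mean() - control[carrier].mean():+.3f}")
```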
2) Even when RCTs determine that a treatment might be safe under known or common conditions, they cannot determine whether an unexpected new environmental condition will make a treatment dangerous. E.g., some new diet may become trendy that turns out to put at risk people who have taken a medication, or who have had a procedure, that was tested before such a diet was conceived and promoted.

3) The safety and efficacy shown by an RCT hold only for
the time period over which the test is conducted. Longer time periods may reveal problems that do not occur in the shorter run. That is why Phase 4 studies, and also epidemiological and other sorts of studies, need to be conducted.

Corollary to 3) Since evidence never comes clearly
marked as evidence, and particularly does not come delineated as “sufficient
evidence”, the “endpoint” of any RCT is always going to have to be judged by
some kind of reasoning process that says when there is sufficient evidence to
constitute knowledge about whether the intervention is safe and effective or
not. Or to put it into the technical parlance, there needs to be a rational determination
made as to when equipoise is sufficiently disturbed to require no longer giving
the treatment to the experimental group -- because it is then known to be unsafe
or ineffective-- or to require no longer withholding it from others -- because
it is then known to be both safe and effective.
Many of the problems involving what constitutes reasonable equipoise to begin a study also apply to determining when sufficient data has been collected to end it because equipoise is no longer tenable; that is, of course, short of obvious mortality and serious morbidity results during an RCT. (A toy sketch of such an endpoint determination follows.)
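As an illustration of what such a rational determination might look like (a deliberately simplified sketch of ours, not any actual trial’s stopping rule; real trials pre-specify formal interim-analysis criteria), one can track the posterior probability that one arm is better and call equipoise “disturbed” when it crosses a threshold:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical true response rates (unknown to the "researchers").
P_TREAT, P_CONTROL = 0.55, 0.40
THRESHOLD = 0.99      # posterior certainty at which equipoise counts as disturbed
CHECK_EVERY = 50      # subjects per arm between interim looks
MAX_PER_ARM = 1000

def prob_treatment_better(s_t, n_t, s_c, n_c, draws=20_000):
    """Posterior P(treatment rate > control rate) under Beta(1,1) priors."""
    rate_t = rng.beta(1 + s_t, 1 + n_t - s_t, draws)
    rate_c = rng.beta(1 + s_c, 1 + n_c - s_c, draws)
    return (rate_t > rate_c).mean()

s_t = s_c = n = 0
while n < MAX_PER_ARM:
    n += CHECK_EVERY
    s_t += rng.binomial(CHECK_EVERY, P_TREAT)    # successes, treatment arm
    s_c += rng.binomial(CHECK_EVERY, P_CONTROL)  # successes, control arm
    p = prob_treatment_better(s_t, n, s_c, n)
    print(f"n per arm = {n:4d}: P(treatment better) = {p:.3f}")
    if p > THRESHOLD or p < 1 - THRESHOLD:
        print("Equipoise disturbed; continuing the trial is no longer justified.")
        break
else:
    print("Maximum enrollment reached with equipoise still undisturbed.")
```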
These limitations of RCTs need to be taken into account when using placebos or no treatment in order to intentionally withhold treatments from people, because these limitations weaken the argument for using placebos in the name of research under certain circumstances. Insofar as an RCT will not likely yield sufficient valuable knowledge, there will be certain situations where the argument for using placebos (and thus withholding treatment from subjects) is sufficiently weakened to make it ethically unjustified.

It is notoriously difficult in human
studies to totally control or even recognize the variety of factors that
represent the (pre)conditions of the study.
Study groups and treatments that seem identical in all but one way, may
in fact involve differences that are unrecognized, and which are hidden by the
size and cohort divisions of the study group.
If we count the (pre)condition of the subject as part of the initial and
testing conditions, it is virtually impossible to test identical subjects
because no two people will likely have the same chemistry or previous
histories, exposures, experiences, etc., let alone pairs of groups of
individuals used as the test subjects and as the control group. Sufficient group size, along with controlling
for the seemingly relevant factors is meant to try to cancel out or minimize
those differences. Otherwise almost any
study would need to be done only on identical twins, and even then there would
be acquired and environmental, if not also genetic mutation, differences. In many aspects of science where we are
unable to find invariable patterns of concomitant or successive conditions, we
still notice statistical patterns. Logically we could say either that there is an element of randomness involved that has no specific cause or invariable prior condition, or that we believe there is one or more causes or invariable sets of conditions that precede the effect (or that precede and determine the immediate cause of the effect) and we just do not yet know what that is or what they are. Given the
variances between humans, and our still incomplete understanding of physiology,
biochemistry, genetics, etc., much of modern medicine and research involves
statistically correlated, rather than recognizably invariable, conditions. We tend to find apparently causally related
factors, rather than definitive mechanisms or specific, determining invariable
sets of isolable preconditions for states of health. Given no possibility of perfectly matched
control and experimental subjects, and given the potential variety of pathways
and influencing factors, the idea is to come as close as seems reasonable with
available evidence to having matched control and subject groups. That fact is sometimes lost on those who would overzealously require such impossibly well-matched control groups be found that no research can be attempted
(particularly in regard to treatment for relatively rare or “orphan” medical
conditions where it is difficult or impossible to get research and control
sample sizes that meet the critical mass necessary to validate the research
statistically). And it is lost on those
who would have such lax standards of control that the research is invalid. Determining the criteria for a valid and
relevantly matching control group is a matter of reasoning and reflection that
takes into account available evidence and the knowledge and reasonableness of
relevant assumptions. It is not a matter
of slavish adherence to arbitrary or unreasonable rules, or rules that are not
relevant to particular situations. Nor
is it a matter of replacing one set of arbitrary, irrational, or irrelevant rules with another that is simply more lax.

Third, we wish to deny the strict truth of
propositions 7 and 9, because what is required is not necessarily “the most rigorous scientific validity”, but the most reasonable evidence, whether strictly scientific or not, and whether strictly empirical or not (as opposed to theoretical and logical but based on, and consistent with, empirical evidence). Science certainly makes great contributions
to knowledge, and clearly, empirical evidence is extremely important, but
science is not the only form of attaining knowledge; and moreover, much of
science is not strictly empirical but uses other forms of reasoning as well,
particularly in research involving “theoretical constructs”, which may
generally be described as suspected causes for observable phenomena, but which
themselves cannot be directly observed.
E.g., fossils give evidence for the prior existence of certain kinds of
organisms which cannot now be directly observed to exist. Much of medicine involves indirect evidence
of the existence and workings of theoretical constructs. And much of medicine
involves probabilistic evidence which may imply but also mask specific causal
relationships.[12]
Science also often employs mathematical models whose relationship with the
physical world is not always known, which is why physicists often have to
discuss ideal particles under ideal conditions.
In short, science is not as strictly empirical as it is often supposed
to be, and the empirical nature of science within the larger context of
evidence and reasoning is not (always) fully understood and appreciated. Rather than limiting medicine to what can be
demonstrated scientifically, especially perhaps when the limitations of science
are not fully appreciated, it is more reasonable to amend propositions 7 and 9
to read something like:

Proposition 7’) Because of the risks involved, medical experimentation with human subjects requires the most rational methodology for gaining knowledge in order to be ethical, because to put people at risk for no good reason (i.e., to put people at risk in order to obtain results that are not necessarily rational or meaningful) would be morally wrong.

Proposition 9’) Therefore (from propositions 6, 7, and 8), the most rational methodology for gaining knowledge is a necessary, but not sufficient, condition in order for experimentation with human subjects to be ethical.

That means proposition 10 would have to be changed to something like the following (which will make it even less often and less apparently true than it is as stated above):

Proposition 10’) The most rational means of acquiring knowledge requires using randomized, double-blind, placebo-control trials in medical research.

For an example of the irrational,
unnecessary, costly, mistaken, and egregious use of an RCT, see Robert Truog’s
“Randomized Controlled Trials: Lessons from ECMO,” Clinical Research 40 (1992): 519 – 27. Also see “The Continuing Unethical Use of
Placebo Controls” by Kenneth J. Rothman and Karin B. Michels, New England Journal of Medicine 331
(1994): 394-98. What the above signifies is that RCTs are
not always ethically warranted or legitimate because they will not necessarily
yield the most valuable or rational evidence that is being sought or because
there will be far less Draconian ways to gain sufficient knowledge. Other methods may be more morally acceptable and
yield as much, if not more, reasonable evidence for the safety and efficacy of
a proposed treatment. We wish to return now to the above
designated “Paragraph A” to propose the more general principle: Randomized
Control Trials, because they withhold treatment from subjects, are not morally
legitimate or scientifically necessary to use when there are other, less likely to be harmful, ways to know the information desired about the progress of the condition under study when no treatment is given. This particularly includes
conditions whose progress without treatment is already well-known. And it
applies even more strongly as the effects of withholding become proportionally
more serious and harmful in regard to mortality and the significance of
morbidity, especially irreversible morbidity. Paragraph A derives its plausibility from
the fact that medicine already knows what the progress of certain conditions
will be if left untreated, or if treated in the conventional ways. That is the same rationale that gives
“compassionate use” its plausibility in giving patients experimental treatments
which are not likely to make them significantly worse off because you already
know how bad they will be without a new intervention (where “worse off”
includes aspects of quality and “significance” besides just life and death). Moreover,
medicine often already knows in the cases of terminal illnesses the manner in
which death will come; i.e., the physical deterioration that will precede it,
the pain or agony likely to precede it, if any, etc. Similarly with regard to
diseases or conditions which cause harm other than death, whether it is a long
drawn-out process of loss of various physical or mental abilities or whether it
is increase in pain or the effects of something such as arthritis, diabetes, paralysis
from spinal cord or brain injury, etc. Even in cases of less significant morbidity,
it is unnecessary to test for placebo or “no treatment” when those effects are
already known. The whole point of
placebo controls is to see what happens that is different from the experimental
treatment. And if you already know what
happens without treatment, there is no need to withhold a treatment from people
whom it might significantly help, simply for the purpose of finding out what
will happen to them. There may be safety
reasons not to give a new treatment to many subjects, of course, in order not
to put any more people at risk than is necessary to get initial safety and
effectiveness information but that is different from withholding treatment just
to see what happens when treatment is not given. This seems particularly true for physical
(in the sense of non-psychiatric) illnesses or conditions where a placebo
effect seems highly unlikely, as in surgical repairs of the tetralogy of Fallot
for dying cyanotic babies. Some
surgeries, of course, may have various components, not all of whose effects on
the condition are fully understood. But
if one has different trial arms that involve different forms of (the) surgery
that are tried on different subjects, that is not a placebo nor a sham
surgery. It involves numerous real
surgeries of different sorts, with different total components, in order to see
which one is more effective, if any. Any
time one is “opened up” or drilled into by a physician for medical purposes and
then closed, even without anything else’s being done, that is still surgery,
even if it is not ‘the’ full surgical procedure being tested as a treatment. And if something in numerous patients’
condition responds favorably to any form of the surgery, then that is an effect
of that surgery, regardless of the extent of the surgery or the unexplained
surprise of the result. Research into treatments for conditions
involving more of a psychological component may benefit from knowledge of a
placebo effect, but even then that may show the placebos to have some
therapeutic properties (whether positive or negative). Furthermore, in general, particularly for
primarily non-psychological conditions, it seems to us unnecessary to use
placebo control groups for a condition where placebo control has already been
used in studying another treatment with the same kind of delivery system. E.g., if there are two different experimental
treatments to be tested for arthritic knees in similar populations, where both
treatments involve taking pills twice daily, it seems unnecessary, if one trial is conducted first with placebo controls, for the other trial to run another placebo arm; it could instead use the placebo data from the first trial for its placebo-control comparison and run only the experimental part of the test, to see whether the treatment is relatively safe and effective (i.e., relatively more safe and/or effective than placebo).
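Here is a minimal sketch of the mechanics of such reuse, with entirely hypothetical numbers (and assuming, as stated, comparable populations, endpoints, and delivery): the new trial enrolls only a treatment arm and compares it against the earlier trial’s placebo-arm data.

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_proportion_test(resp_new, n_new, resp_hist, n_hist):
    """Two-sided z-test: new treatment arm vs. historical placebo arm."""
    p_new, p_hist = resp_new / n_new, resp_hist / n_hist
    pooled = (resp_new + resp_hist) / (n_new + n_hist)
    se = sqrt(pooled * (1 - pooled) * (1 / n_new + 1 / n_hist))
    z = (p_new - p_hist) / se
    return z, 2 * (1 - normal_cdf(abs(z)))

# Hypothetical numbers: the placebo responses are borrowed from the
# earlier trial; the new trial enrolls only an experimental arm.
z, p = two_proportion_test(resp_new=70, n_new=150, resp_hist=45, n_hist=150)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

The known caveats about historical controls (population drift, differences in concomitant care) would still have to be addressed; the sketch shows only the comparison itself.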
And it does no good to reply that a placebo control might turn out to have different results in one research study measured against treatment A than it does in another study measured against treatment B. If that were the case, as seems to happen sometimes, then it is really not clear what the placebo control even means, for research or pure science, let alone for clinical practice.

Miller and Brody, in discussing one
particular research trial to argue for the necessity of placebo controls (in
this case, versus active controls) make the following two comments (not
terribly far apart), the first being about the results of the trial in
question, and the second describing the evidence available to the IRB before
they approved the trial: “Neither
hypericum nor sertraline was found to be superior to placebo on the primary
outcome measures.” “Approximately
twenty-five clinically available antidepressants, including sertraline, have
been shown to be superior to placebo.” [ii] Miller and Brody argue that the second
statement shows that IRBs (Institutional Review Boards) do not really consider
equipoise as being necessary to do research on human subjects, because if they
did, they would not have allowed the placebo control arm of the study, since
the active control was already demonstrated to be superior to placebo and there
was no “equipoise” any longer existing about that. There are other possible reasonable explanations
for the potential rationale of the involved IRB, but the curious aspect of
these two statements is that on the surface they are incompatible in a way that
needs to be explained, for in general if S is superior to P, and P is
equivalent to H, then logically, S must be superior to H. Moreover, S cannot logically be both superior
to P and not be superior to P; but RCTs Miller and Brody describe supposedly
show it to be both.
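Put schematically (with $S$ for sertraline, $H$ for hypericum, $P$ for placebo, and $\succ$ for “is superior to on a given outcome measure”):

\[
(S \succ P) \wedge (P \approx H) \Rightarrow (S \succ H); \qquad \neg\big[(S \succ P) \wedge \neg(S \succ P)\big].
\]

Read at face value, the two statements jointly assert both $S \succ P$ (from the prior studies) and $\neg(S \succ P)$ (from the trial in question).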
It is possible that the “primary outcome measures” for the study under discussion are different from what was studied in
the previous studies, but, if so, that would obviate Miller and Brody’s point
that equipoise does not exist (for the primary outcome measure under
study). It is possible that the IRB did
not trust the previous study results, but that seems to obviate the point of
depending on research studies or of including placebo controls in them. Neither of these explanations will help Miller and Brody. But more importantly, if the
outcome measures are the same in this study as were tested previously, then it
means that placebo controls give different outcomes in different studies; and
that should be troubling for anyone who holds that random placebo-control
studies are a meaningful, essential part of research on human subjects. It seems more likely to us that the whole point
of placebo controls is to try to determine whether the results of the treatment
arm of the trial really show safety and effectiveness of the treatment, or
whether some significant part of the result can be attributed to just being
administered to in a research study, even with a supposedly inactive, inert, or
irrelevant agent. As explained
previously, Freedman makes the strong case that placebos do not allow one to obtain
a “net therapeutic effect” of an experimental treatment. And the Miller/Brody example seems to
indicate that placebo control studies are not consistent, or not carefully
enough described, understood, or interpreted to warrant their being considered
absolutely necessary or sufficient for determining the net therapeutic effect
of a treatment. This is not to say there is no
significance to the results of a placebo control. It is to say that placebo studies need to be
understood better themselves, and their significance needs to be more carefully
considered within the broader context of trying to determine how much effect
(for benefit or harm) an experimental intervention really has. Placebo controls can likely contribute to
that understanding, but not in as obvious or straightforward a way as is usually assumed. There are numerous articles available that make good cases for when
placebo controls are justified and when they are not, and why. Plus it seems obvious to us that if the progress of a
condition is already known when left untreated and is also known when treated
with different previously used medications or procedures, then it should be
reasonable to believe that any significant divergence from any of those paths,
when the delivery method of the new treatment is the same as has been used
previously (e.g., pills or injections in the same location, similar frequency, etc.)
is to be attributed to the treatment, giving a net therapeutic effect. This would be a net therapeutic effect for
clinical purposes even if some sort of (especially permanent) “placebo effect”
is part of the cause of the treatment’s success or failure. Placebo or non-treatment seems more important and
reasonable when the progress and/or outcome of a condition is not already really
known, and/or where the effect of standard accepted treatment is not already
really known. Unfortunately the effects
of many “standard treatments” are not necessarily known as well as they could
and should be. In such cases, a
placebo-treatment arm or, we think more importantly, a non-treatment arm can be
justified if there is also no reason to believe significant additional harm
will befall the subjects because they are in that arm. But to repeat, the only point of placebo or
non-treatment controls is to see what the net therapeutic effect of an
experimental treatment is – and insofar as that can be determined in a rational
way without withholding treatment
from research subjects for which an experimental treatment for a significant
condition seems promising, it would be wrong to put them in a non-treatment or
placebo control group, because the results of that are either already known (in
the case of non-treatment) or they are difficult to interpret (in the case of
significant placebo effects that have never shown up before with the same kind
of delivery system used previously in studying that condition).

Note that it is the purposeful “withholding” of treatment
for which there is great evidence of safety and efficacy that causes the
original ethical dilemma for researchers in general and for physicians in
particular. It is not merely “not providing treatment” that is ethically problematic; it is intentionally not providing available treatment which could be provided that is morally problematic. Physicians were not
culpable for not successfully treating those with bacterial pneumonia before
penicillin was discovered. Physicians are not responsible for not treating
people they do not know are ill. They
are not responsible for not treating those who suffer heart attacks miles from
civilization while hiking alone in the mountains or desert. They are not responsible for not using
equipment, procedures, or medications they do not have available to them even
if they are in the desert or mountains with the person who suffers an illness
or injury. Even researchers are bound by ethics in general (as is
everyone) not to harm innocent people without some justifiably overriding
reason. And everyone is bound by ethics to help those they can when it does
great good for the beneficiary without an incommensurately greater cost or risk
to the person doing the act. Physicians do
have an added obligation to help
others medically beyond the minimally required effort that everyone should
make, because they have taken an oath to do so, because they have the knowledge
to do so, because they accept that additional obligation in offering to treat
people who come to them, and because in many cases the public helps support
their medical education in return for having physicians available to provide
for them, and because physicians are often accorded certain financial benefits
that incur obligations with them, just as any service for fee does. In short, there are a number of obligations
over and above those everyone has, which people (including physicians) can and
do incur, and which they are then responsible to honor. Life is full of such additional obligations,
for example, keeping one’s word to do something simply because one gave it,
even though one might not have had to do the act if one had not said s/he would
and even though one did not have to say s/he would in the first place. But promising or committing to others that
one will do something which they then depend on you to do because of your
commitment, bestows a prima facie obligation then to do it. Most verbal commitments, promises, vows, and oaths bestow such additional prima facie obligations that have to be honored
unless there are morally overriding circumstances that justify or even require abrogating
them. But even without voluntarily incurred additional obligations, everyone
still has prima facie obligations to
do no harm, and to be reasonably helpful to others. Moreover, medical researchers, by virtue of their
expertise, and by virtue of their seeking or accepting people into their
programs, we believe, have an added obligation to look out for their subjects’
best medical interests as they pertain to the study. It is not physicians alone who have this
additional obligation. Nor does merely
acquiring consent allow one to mistreat another individual. This is particularly true, of course, if the
consent is for the incurring of some reasonable risk, not just any risk, such
as due to malevolence, insensitivity, negligence, etc. And even if someone were
to be so irrational as to consent to being mistreated, that does not give
anyone the actual right to mistreat them just because of that consent. If a patient trusts a physician or a subject
trusts a researcher and gives carte blanche approval to do what they think is
best, that does not give the physician or the researcher moral justification
for mistreating them to serve his/her own ends, any more than entrusting one’s
finances to a financial planner with the proviso to do what s/he thinks best,
is an invitation or advance pardon for them to embezzle the assets.

The Purpose of Informed Consent

There are a number of purposes for requiring the
informed consent of research subjects: 1) It is supposed to signify that the research subject
understands and agrees to what s/he is “getting into” or to what risks s/he is
allowing him/herself to be exposed, and for what possible gain if any or what
possible beneficial results there might be for medical science and future
patients. Studies have shown this purpose
is not often met, because research subjects often assume, no matter what they
have been told, that the researcher or their physician will not allow them to
be harmed, and they assume regardless of their intellectual understanding, that
they will receive the treatment they need, and that they personally will not be
given a placebo even if placebos are given out randomly.[13]

2) It affords the opportunity for the potential
research subject to let the researcher or the physician know which potential
outcomes s/he might find acceptable and which ones might be unacceptable to him
or her, in order to double check with the researcher or physician that this
research meets or has a reasonably good chance of meeting those expectations
and not likely causing significant unacceptable harm. Meaningful informed consent can meet that
purpose, but formal procedures for obtaining informed consent might not, and
probably won’t, particularly if they are written in legal and scientific or
medical technical language that is unintelligible to the population which has
to sign them. But even when the language
is understandable to the subject population, that does not mean they will
understand the significance of the words or statements even if they can restate
the provisions of the agreement in their own words. One can imagine someone’s taking a drug for
erectile dysfunction after consulting with their physician and after having
seen a million ads for it on tv, still saying after an ocular blood vessel
occlusion “Well, yes, I knew it could cause sudden ‘loss of vision’, but I
didn’t know I would be blind! That I
would end up like this! – unable to
get around, go to work, drive, watch tv, read, and…..” It is our contention that the researcher has
an obligation to make sure the research really is in the subject’s best
potential interest and that it is not
potentially devastating to the subject.
And it is our contention that the researcher has an obligation to make
sure the subject understands the risks, the probabilities, the magnitudes, and
the significance of the magnitudes! And those things are not accomplished by
merely stating facts to the subject and then obtaining the subject’s signature
on a form. So although a verbal
statement of the facts and a signature on a form may be necessary, it is not
sufficient for making sure that a researcher is not exploiting or taking
advantage of the subject, whether purposefully or inadvertently.

3) It meets certain legal requirements to absolve the
researcher from charges of battery, or from non-negligent iatrogenic injury, though
not from negligence that causes harm.
From a legal and somewhat cold or calculating standpoint, it also acts
to make the volunteer subject essentially complicit in the result so that s/he
“cannot complain” about it later. But
morally that would not follow if the research procedure is not as good as it
should be, even if it meets an approved protocol, or if the explanation to the
subject prior to their consent was not as good as it should have been, even if
it meets the legal requirements for the “informed” part of informed consent.

4) It signifies, if the consent is truly or validly
informed, that the research subject is volunteering of his/her own volition for
the research study. It may or may not
achieve this purpose depending on the understanding of the subject, which we addressed
above in terms of the subject and the researcher both understanding the
significance of the possible outcomes besides their probabilities and magnitudes. It is perhaps safer to say it shows the
research subject is not being acted upon in a way that is against their will,
but it does not show they are not being acted upon in a way that would be against their will (if they
understood it better or after they see the outcome).

5) It allegedly proves the research subject is not
being exploited or taken advantage of.
It cannot do this, however, except under certain circumstances, which do not always obtain and which cannot be demonstrated in any formal way to obtain. In the context of “coerced” compliance to join a
research study, Franz J. Ingelfinger[14]
argued: “[I]t must
be granted that natural contingencies (“acts of God,” things which come to pass
naturally, those contingences which we cannot hold anyone responsible for) do
not render a person unfree, nor do they render unfree the choices which a
person makes in light of those contingencies.” If one holds that uncoerced cooperation precludes
exploitation, this might reasonably be interpreted to imply that no one is
exploited who consents to reasonably potentially beneficial research studies
because they have a condition resulting from a natural contingency (e.g.,
serious illness, accident, genetic condition, etc.). That would be untrue, however. Take the simple case of your car’s breaking
down in a small town with one mechanic who, upon seeing your out of state
license plates, and ascertaining that you are on an important business trip for
which time is crucial to you, quotes you an exorbitant fee for a repair that is fairly simple for him and for which he would charge townspeople much less. Your predicament is due to a natural contingency in the above sense, but nevertheless, because the mechanic has you “over a barrel”, he is exploiting you, or attempting to exploit you. Of course you are free to forfeit the
business deal by turning him down, so his exploitation is not guaranteed. And you are not necessarily coerced into
accepting it. But we think it is
reasonable to consider this a case of coercion, and certainly of exploitation,
if you feel you need to accept because you need the business deal and it will
still be very profitable for you. You
have a certain, minuscule amount of freedom in the matter, but it might be
irrational in this case to reject his services, even though you know you are
being taken advantage of.
6) Informed consent is alleged by some, mistakenly we
have argued above, to absolve researchers or physicians from treating the
subjects with a primary emphasis on beneficence and non-maleficence because
they are then instead supposedly administering to the patient/subject with a
primary emphasis on meeting his or her wishes and respecting his or her
autonomy. Specifically it supposedly absolves physicians from either treating
subjects, or recommending participation in research that treats subjects, in
ways that are not necessarily in the patient’s best interest if that is what
the patient wants or is willing to do. Even in medical practice, patient autonomy and medical beneficence/non-maleficence sometimes conflict, and recent trends in medicine place a higher value in many cases on respecting patient autonomy than on a professional-expertise approach, often considered “paternalistic”, of treating (or wanting to treat) the patient in a way that is in his/her best interest purely from the standpoint of medical health, whether the patient accepts that or not. It is our contention that whether one exploits
research subjects is not just about how one behaves toward them but also
involves one’s state of mind and intentions toward the subjects. If one sees research subjects primarily as
facilitators of his/her own research goals, no matter how laudable those goals
might be, rather than as human beings who deserve an attempt to help them,
one is exploiting them even if one is doing the very same thing as one who sees
his/her mission as trying to help the person, and who is not exploiting or
taking advantage of their conditions.
Apart from conforming to certain necessary standards of disclosure, it
is the researcher’s compassion, understanding of his/her subject, his/her
sensitivity in general, and integrity in general that determine whether s/he is
exploiting the subject or not. This is
not to say that an observer cannot tell whether a researcher is exploiting
his/her patients or not. It is to say it
takes more work for an outside observer (such as an IRB) than merely collecting
signed consent forms that have been approved by attorneys or even
ethicists. It involves knowing the
sensitivity and integrity of the researcher. More importantly, we believe that
a researcher who really cares about a subject’s wishes in regard to his or her
potential treatment outcomes will likely (though not necessarily) do more of an ethical nature to try to honor those wishes than will the researcher who
merely views the subject as an opportunity to gain knowledge and personal
benefit from discovery or publication.
It is not that the one will be any more successful necessarily than the
other in securing medical benefit for the patient, but that the former will
simply more likely (but not necessarily) treat his/her subjects more humanely
and ethically in the process. Now Jay Katz[15]
has devised a list of minimally necessary disclosures to ensure obtaining valid
informed consent. Among these are information
that the research subject’s therapeutic interests will be subordinated to
scientific interests; that they may be harmed by the research instead of being
helped; that other options besides participating in research may be more in
their self-interest, etc. While such lists are important to prevent leaving out of the informed consent process something previously realized to be important, they do not preclude the investigator’s having an exploitive frame of mind that might put the subject in more jeopardy than is acceptable or necessary to him or her, no matter how many provisions are formally met in terms of the letter of the law or the considered ethical standards of the time. We suggest that while such lists are important, it is
equally important that the researcher genuinely cares about the well-being and
best interest of the individual subject in order to more likely notice
potential significant risks and effects not covered in the lists, and that the
researcher makes sure that s/he and the subjects understand not only the probabilities
and magnitudes of the possible outcomes, but their significances for the
subject as well. And this is an ongoing
process even after the treatment starts, because the subject is allowed, and
should be encouraged, to withdraw from the study at any time the results appear to be causing, or tending toward, outcomes of negative significance. All this requires
sensitive insight and effort by the researcher, not mere formality, but what is
required is a reasonably normal moral and humane effort, not a superhuman,
perfect one.
Implied and Substitutionary Consent
We pointed out earlier that obtaining valid or
legitimate informed consent was a sufficient condition to meet the necessary
condition of not treating people in ways against their will or in a manner that
would be against their will. There are, however, emergency conditions where it is impossible to get consent from the potential patient/subject, such as when an experimental intervention must be done within a limited time frame if it is to be effective at all. The patient may be in no condition to give a meaningful or valid
consent (or possibly even a valid denial).
And there may not be available a relative or legal representative who
can give consent. If there is, they may
not be able to weigh the pros and cons of accepting or rejecting the
intervention and may want to rely on the physician’s best judgment. Emergency rooms normally are permitted,
unless there is clear evidence to the contrary, to presume patients would want
to be treated; and in some cases (e.g., where a patient is in pain and
traumatic psychological despair) may even disregard pleas to be allowed to die
untreated. So valid informed consent
from the patient may not be necessary to enroll them in a research study for an
intervention, say, following serious injury where the intervention has to be
administered in a timely manner that precludes getting informed consent,
particularly if there is good reason to believe it will be helpful and little
reason to believe it will be harmful. This is one area where IRBs might legitimately substitute
their judgment for that of the patient or might allow the physician to
substitute his judgment for that of the patient. But that is because no matter what is
decided, it cannot be decided in a credible (or perhaps even possible) way by
the subject/patient. So no matter what is done, the decision will, of
necessity, be made by the IRB and/or the treating physician. The more usual case, however, is that physicians, researchers, and the IRBs guiding their practices in obtaining informed consent act as filters that control the information given to recruits for research studies. The information that is given, and the manner in which it is given, particularly the patient’s likely reliance on the researcher’s and/or physician’s advice, will influence the consent decision that is made when a patient
or his/her health care proxy is able to be consulted. So, in part or to some degree, researchers,
physicians, and IRBs cannot avoid substituting their judgments for the
potential subject’s judgment, though it is not totally the same thing as
deciding on behalf of the subject whether to participate in the trial or not.
In some cases it may be tantamount to the same thing. Hence, there is a strong obligation for IRBs and researchers to understand potential subjects’ perceived needs and the significance to them of the different options, so they know what information to provide, which is basically the information that even if
the worst outcome occurs, the patient/subject or his/her health care proxy (or
most sensitive, rational people) will not feel something significant was
withheld in the consent process that should have been divulged and made crystal
clear. Informally, this might be expressed as: the consent process should be one that would be found acceptable, and not likely to be embarrassing or disgraceful, if it were to be shown on 60 Minutes after some tragedy. In regard to projects
involving vulnerable patient populations, IRBs must have members who are either
part of those populations or whose expertise involves them. But still, the patient has the opportunity to
ask further questions, seek other counsel, and perhaps reject the advice of the
physician or researcher. However, Truog et al have suggested a model that
includes what we call substitutionary consent for certain kinds of cases,
whereby research subjects would not have to give specific (as opposed to
general) consent for particular treatment arms or options[16].[iii]
One of the requirements of their proposal is that “no reasonable person should have
a preference for one treatment over any other, regardless of the differences
between the treatments being compared.”
They recognize, however, “[a]lthough the reasonable-person standard is
widely used in the law, it is far from perfect.
For example, there is always the possibility that a patient may be
unusual in ways that cannot be anticipated and that would lead the patient to
have a preference for one treatment over another.” But this is precisely why informed consent, when it can be given, is necessary: there is no reason to believe that a patient will be willing to be treated in whatever way his physician, a researcher, or IRB members from the community (who are presumably supposed to represent “everyman’s” thinking in regard to what they would want, perhaps à la the Golden Rule) would choose for him. It is one thing for the
members of the community review board to heavily influence the decision a
research recruit might make by how they structure the informed consent
information required to be given; but it is quite another, we believe morally
inappropriate act, to make that decision for the subject when the subject could
reasonably make the decision him/herself, which is basically the review board’s
saying “we didn’t think you’d mind.” But informed consent is meant precisely to
find out whether the patient or subject would mind.
The Philosophical/Ethical Function and Justification for IRBs/ERCs
Notice that nothing in propositions 1-22 mentions
Institutional Review Boards or Ethics Review Committees in regard to conducting
research ethically on human subjects. Yet, Institutional Review Boards or Ethical Review
Committees are required by the Common Rule[iv],
by the International Ethical Guidelines for Biomedical Research Involving Human
Subjects[v],
and by The ICH Harmonised Tripartite Guideline – Guideline for Good Clinical
Practice[vi],
and their duties are listed in these documents.
But the point of using them as a primary way of protecting human
subjects, in accord with other provisions in these documents is apparently
presumed obvious. We believe that leads
to problems that need to be addressed, because the justification of IRBs and
ERCs is not so obvious as it might appear; and the underlying functional role
they thus (should) play in protecting human subjects is not as clear as it
needs to be. It is fairly clear that IRBs and ERCs are intended to
be a check and balance for overly zealous, overly optimistic researchers, and
to guard against biases, insensitivity, cruelty, reckless or selfish disregard,
conflicts of interest, or anything else that would cause or allow the
exploitation of human research subjects and/or that would cause or allow the misuse
of data collected and interpreted. The
question is how they are to do that, and why they are necessary if there are
guidelines, rules, codes, standards, laws, etc. already in place. They are not merely intended to be police who
enforce these rules, though they are to enforce the rules. But they also are meant to interpret the
rules in order to apply them to specific situations; i.e., specific research
projects. But what insures that the IRBs or ERCs do this right
or that they do it any more correctly than would individual researchers with insight
and integrity who are the ones designing and proposing the projects in the
first place? In Nazi Germany, would not IRBs have simply been populated by people like-minded with those who did the heinous research? And would not a simple legal requirement of obtaining “informed consent” have been sufficient to prevent the atrocities if the society had really cared to protect human subjects, particularly Jews and other non-Aryans? At the time
at least that the Tuskegee study was started, would not an IRB have been as
accepting of the beliefs of the time as the researchers were? What exactly, besides much labor and expense,
does an IRB or ERC add to the process that ethics rules and laws, with suitable
penalties do not? And what if the research suspicion in the Tuskegee study had been right, that syphilis in the African-American population was not harmful in the way it is in the Caucasian population, and that the risky treatments of the time (prior to penicillin) would therefore have been unwarranted? Is it not just the
hindsight result that syphilis acts the same in the black population as it does
in the white population, that makes the idea seem so egregious and racist to us
now? Or was it a racist idea? Is it racist to be concerned today about how hypertension or tuberculosis in a population of African descent might be different from the way it is in a population of European descent? Or are there genetic factors, and/or historically induced factors of natural selection, adaptation to native diet and lifestyle, or immunity acquired from epidemics centuries ago in Europe, that make these two populations different in regard to hypertension and tuberculosis? But more important for the issue at hand, why is an IRB likely to decide these matters better than researchers or legislators or judges? And why, if an institutional
review board is affiliated with an institution doing research, should it be less optimistic or zealous about wanting to see successful research projects than the individual researcher or department designing, promoting, and conducting the research? What exactly is it about IRBs and ERCs that is
supposed to make research on human subjects that they approve more ethical or
better than if qualified researchers were left to conduct research on their
own, and if researchers were punished in court or by those who license or
employ them when they treated subjects immorally and/or violated codes of
research? We believe there are a number of important elements in
the rationale for IRBs and ERCs; and we believe that understanding those
elements helps explain how IRBs and ERCs should
function. It seems to us there are four underlying ways in which
IRBs or ERCs are meant to provide protection for human research subjects:
1) They serve the idea that collective knowledge and
wisdom are better than individual knowledge, particularly when members are from
different backgrounds. IRBs and ERCs
typically have to have not only members with scientific interests and medical backgrounds,
but also at least one member outside of science and one member not part of the
institution. Insofar as “vulnerable” populations are the subject of any
research, members of that population or those familiar with their needs, are to
be on these committees or boards. The
point is to try to make it more reasonably likely, first, that the study under
consideration is scientifically valid and meaningful so that subjects will not
be exposed to risk or harm for no good reason or actually useful purpose, but so that meaningful knowledge will likely come from the study. Second, inclusion of the non-scientific
members is to try to make it more reasonably likely that the moral or “human
needs” of the subject population are duly considered and met, and are not
ignored or given a merely secondary importance to the scientific search for
knowledge. In terms of the
deontology/utilitarianism dichotomy, the point is to use collective wisdom of
board/committee members to make it more likely that the most knowledge can be
obtained without abrogating subjects’ rights.
2) As stated earlier, the collective wisdom is also supposed to balance and restrain any potentially problematic part of the protocol that is prompted by an overzealous or overly optimistic view
the researcher has, whether it stems from intellectual enthusiasm for his/her
own idea, from his/her intense curiosity about the topic, from career ambition,
financial profit, or any other conflict of interest that blinds the researcher
to scientific, ethical, pragmatic, or other problems that need to be addressed
and remedied if they are found to exist.
3) They are supposed to make sure the rules and
guidelines are met by researchers, thus serving a policing function.
4) They are meant to interpret rules in regard to
particular application, and we contend, they should be allowed to add new rules
or permit exceptions to existing ones under certain circumstances we will
explain momentarily. We contend they
should afford rational flexibility in order to make sure there is ethical
compliance with what is right, not just legalistic compliance with a set of previously
prescribed rules that may have loopholes or that may be irrationally
restrictive under particular circumstances. IRBs also serve as a watchdog committee for the institution, to protect it from liability for being associated with, conducting, or condoning unethical research; but that function is served by meeting the first four and might be considered a derivative function, and it has less to do with protecting research subjects than with protecting the institution. Meeting the first, second, and fourth functions
requires having a board or committee whose members are knowledgeable,
sensitive, rational, and logical and articulate enough to be able to teach and
learn from each other. That does not
always happen. And clearly, credentials
(including membership in the subject population) do not always guarantee
it. Often
researchers in a field, particularly perhaps a field studying “orphan” or less
prevalent medical conditions, feel that the scientists and physicians on the
board or committee do not fully appreciate (the importance of) the research
proposals brought before them. In some
cases members of a subject population, who serve on the committee, may be very
atypical of the research population and thus not have the necessary insight about
the subjects which they need to be able to share with the committee or board. And one frequent criticism of “ethicists” on review boards or committees is that they are of no help because either they cannot explain principles in a way the other
members can understand them or see their applicability, or the ethicist
him/herself cannot actually apply the principles to the cases at hand. Some
ethicists even suffer “paralysis by analysis” and cannot seem to take a
meaningful position that could help crystallize agreement or disagreement with
it. Hence, we propose that there be some sort of appellate
or review provisions or system in place for researchers who wish to appeal a
decision of their particular review board, and for members of a board or
committee who believe the majority has made an error in approving a project or
approving it without additional safeguards. Also, organizations for practitioners in
those medical fields not likely to be represented on review boards or ethics
committees might want to form their own ethics advisory committees whose prior analyses
of proposed research projects might have helpful influence on IRB or ERC
deliberations.
Codes, Rules, Laws, Policies, Guidelines, and Judgment
In regard to function 4, the reason that IRBs and ERCs
should have flexibility is that ethics, like science, is something of a
“bootstrap” enterprise in that rules in ethics, like laws in science, are
codifications and collections of the state of best ideas based on the logic of
the evidence known at the time. As new phenomena are encountered or discovered
and as new perspectives about known phenomena are conceived, rationality
demands that the new evidence be taken into account, and if necessary,
scientific laws and methods, and ethical principles, be modified. Ethical principles and scientific laws are
meant to be collections and codifications of what is most reasonable to
believe, given previously known evidence.
They are not meant to be calcifications of rules that deny the
significance of new evidence, perspectives, or arguments. Unfortunately, while
that sounds simple in theory, it is difficult in practice to decide or agree when
new evidence or perspective warrants new rules or laws, and whether the new
rules or laws accurately capture in their wording the essence of what is
discovered. Rui Wang et al express this
well in an article in The New England
Journal of Medicine that proposes and explains guidelines for reporting
statistical subgroup analyses in The Journal: “As always, these are
guidelines and not rules; additions and exemptions can be made as long as there
is a clear case for such action.”[17] It is very difficult, particularly in ethics, as in
law, to capture in words the exact insight one has in mind. It would be most helpful for principles to be
explained as fully as possible when they are formulated not only so that others
can understand what is intended but so that one can more likely see oneself
whether the formulation is flawed. But
modern society tends to have a penchant for stipulating rules rather than
explaining and justifying them, which turns out too often to be not very
helpful as a guideline for understanding what the rule or law really is
intended to mean or convey. In law, the courts often have to spend considerable
time, effort, money, and resources to figure out what probably should have simply been explained in the first place – the intention and “spirit” of the law – alongside the law when it was passed, in order to prevent the colossal
waste of time and resources later. In formal systems, such as law, sports or other
institutions with rules, or written codes of professional ethics (as opposed to
actual ethics or moral philosophy), the outcome is considered to be correct
when it results from following the rules and procedures accurately. But morally that is insufficient when the
rules or procedures are flawed and/or inappropriate in the first place.[18] It is just as wrong to follow a bad rule as it
is to follow an immoral order, and as it is to ignore a just rule. The one possible exception to not following a
bad rule is if not following it will lead to (significantly) more harm or
abrogation of rights than would following it.
But when following a rule leads to morally wrong results by causing
harm, abusing rights, or by preventing good, then following it just because it
exists is not the morally right thing to do. “We were just following the rules”
is no more moral than “We were just following orders.”
The Misconception About “Therapeutic Misconception”
In the paper “False Hopes and Best Data”, Appelbaum et
al describe, define, or discuss at least two different meanings of “therapeutic
misconception”. The first concerns a
patient who has consented to become a research subject after it is carefully
explained to him that he may receive a placebo and other aspects of the way
research is conducted: “Yet when
the patient is asked why he agreed to be in the study, he offers some
disquieting information. The medication that he will receive, he believes, will
be the one most likely to help him. He
ruled out the possibility that he will receive the placebo […]. In short, this
man, now both a patient and a subject, has interpreted, even distorted, the
information he received to maintain the view […] that every aspect of the
research project to which he had consented was designed to benefit him directly.” The second description simply states the factual
elements about research protocols that show the difference between them and
clinical care a patient would receive from a physician dedicated to treating
him/her as best the physician can: randomization, inflexible medication
quantities and delivery times in spite of indications more or less medication
might better serve the patient, researchers not being able to monitor the
effects of the treatment as the research is in progress, etc. The misconception then is supposedly that the
research subject believes that he will receive the same kind of medical care in
the study he would receive when being administered any valid treatment option
by a qualified physician, even if the subject realizes that not every step of the research trial is
designed specifically to benefit him. We agree that the first example is a serious misconception that needs to be avoided insofar as possible. Research literature and
clinical experience present various better and worse ways to do that. We wish however to discuss the second
description, because that presents more of an ethical policy or moral
philosophy issue than does trying to figure out and guarantee the most
practical methods to try to prevent misunderstanding by research subjects. It is our contention, in light of what we have argued
so far, that from an ethical standpoint, the subject’s belief that s/he will be
treated as a patient, even though this is an experimental treatment, is a
reasonable one, and that when research deviates from that concept, it is the
research that is unethical; and it is the research that is (scientifically or
ethically) unreasonable and mistaken, not the patient. First, Appelbaum and his colleagues do point out that
“There are at least two reports in the literature of physicians’ reluctance to
refer patients to randomized trials because of the possible decrement in the
level of personal care.” But surely this
is a common occurrence. Anecdotal
information from physicians in conversation about this generally quickly points
out cases in which physicians have advised patients not to participate in
trials when they think it will not serve the patient well. They consider that their ethical duty as physicians, not much different from advising patients who ask them not to try unproven and likely sham alternative medicine approaches, particularly ones that may be expensive and harmful, let alone disappointing. Physicians
also routinely advise patients against medications they might inquire about
that they have seen advertised on tv or that the patients’ friends have
recommended. In some cases they give the reasons; in others, they simply say it
would not be a good thing for the patient.
Moreover, researchers and physicians will often remove
subjects from a study when they see a serious decline or signs that it is about
to happen. They do not need to know
which treatment arm the subject is in at the time. It is not that research subjects are not
being sufficiently monitored (ideally); in fact, in a research study, they may
be monitored more closely than they would be as patients in a physician’s care
(apart from being hospitalized). It is
only that the monitoring does not include knowing what the causal factor that
is precipitating the decline might be, whether treatment or placebo, or
something coincidental and unrelated to the study at all. So it is not that a patient’s best interests
are being ignored in an ethical research study – particularly with regard to
harm being intentionally and knowingly caused or permitted (to continue). Furthermore, while some, such as Miller and Brody,
argue (in our view, mistakenly) that people consider the phrase “therapeutic
research” to mean research that is therapeutic, it pretty clearly means
research into potential therapies, which is a significant point. The “therapeutic” in “therapeutic research”
means that therapies are the subject of the study, not a quality of the study. The distinction between therapeutic research
and non-therapeutic research is that the latter is not research that fails its
goal (to be therapeutic) but is research intended to find out information
perhaps unrelated to therapies, such as surveys intended to elicit causal
commonalities among people who may have a disease or condition. It may also be that “therapeutic research” is a phrase that can be used to describe research (such as a retrospective study) about a past experimental treatment, but it is normally applied to a new study in which an experimental treatment is being tried as the central element of the study. At any rate, therapeutic research
is research into potential therapies, not research that is by its nature meant
to be a therapeutic activity, in the way that “therapeutic massage”,
“therapeutic gardening” or a vacation or afternoon off might be by those who
find those activities restorative. The significance of that is that the whole point of ethical
therapeutic research is to find something helpful for the patients with that
condition, some of whom will be the subjects in the trial and who will be the
first to be helped if the experimental therapy is successful. Proposition 2 at the beginning of this paper
“Medical progress benefits future patients with the condition for which
progress occurs” is sometimes construed to mean that only future patients benefit or are intended to benefit from
research, not the patients who are in the trials. That seems simply false in most ethically right research; and when it is true, the research is normally unethical – as when vulnerable populations (e.g., the poor, the
imprisoned, the desperate) are the subjects of experimentation with no plan to
continue to help them or members of the group to which they belong if the
therapy proves to be beneficial, but to use it to help others instead,
particularly others who will make its development highly financially profitable.
There is at least one kind of situation where a person
undergoing research as a research subject is not the one who will likely
benefit from the experimental procedure or therapy, but that subject’s
participation then is still ethically right if they understand that and still
consent, and if other conditions are also met.
That is the case in something like a pioneering radical surgery such as
a heart transplant in someone who is about to die soon and who is suffering
with no prospect of relief or worthwhile quality of life with standard
treatment, but where the experimental procedure is pretty sure to fail in
some unknown way that is necessary to discover in order to try to promote
progress for the benefit of future patients.
Such a trial is ethical only insofar, at least, as the subject understands
this and its significance for him/her, and insofar as the amount and quality of
life potentially forfeited is not of any real significance to the patient
volunteering to be a subject. This is
the kind of case where the likelihood of failure in the first attempts is
considered to be greater than usual for experimental therapies, but the
benefits for future patients holds some promise. But this is not the usual situation in
therapeutic research. The usual
situation is research into a therapy that is expected to benefit the research
subject[19]
as well as future patients with the condition, though future patients may also
benefit more as additional information is learned as time goes on. Normally, there must be some reason to believe an
experimental intervention will be helpful and not unduly harmful before it is
tried on a human subject. There may be
prior observations of success (as when a valid intervention for one purpose is
seen to have beneficial “side effects” for other conditions that the patients
for whom it is approved happen to have along with the condition for which the
intervention is approved). There may be good theoretical reasons to expect success, based on, say, biochemistry or
genetic principles, and/or there may be prior animal study successes. There may be some other kinds of evidence
that justify trials on human subjects as being potentially successful. All this being said, we wish to raise, but not try to
fully analyze, one particular issue of concern to us in regard to research on
human beings – conducting research whose risk seems to outweigh its beneficial
value, but for which there is a “demand” or “market value” for the product if
it is successful. There are a number of conditions where the potential harm of
the therapy (whether experimental or even approved) is significantly worse than
the condition on any objective level, but where people will want or use or be
willing to try the therapy anyway. This
is a case where (they believe) the condition they have is significant enough to
warrant risking some other harm. We have
presented “significance” as being personal and subjective, but it seems to us
that there is a limit to that subjectivity, in the same way that any act has
limited justification simply on the grounds of autonomy. We have written previously[vii]
about such limitations in regard to autonomy, and two salient examples of
limiting autonomy would be not letting friends drive drunk and not letting someone
commit suicide over a temporary despair that seems permanent to them at the
time. There are objective features of
such cases that make overriding autonomy not just mere paternalism or merely
imposing one’s will on others. One such feature we will mention here is strong
reason to believe that if the negative effect occurred, the person would then seriously regret having had the
treatment, even though they beforehand cannot imagine having such regret. One way to state the problem is that it arises from a
conflict between the two qualifying conditions in proposition 16, because what
someone wills (or believes s/he wants) prior to treatment may not be what they
“would will” or be willing to accept after the treatment. The agent making the
treatment available then has to decide which option is best because either they
will have to honor the person’s current will or the person’s likely future will
(based on the evidence of resulting past cases where the current will was
honored), so they will violate proposition 16 as it is stated no matter what
they do. This is not a matter of
overriding, ignoring, or disrespecting the person’s autonomy because the
question is which person’s autonomy to honor – the “before” person or the
“after” person, when there is no way to honor both because the “before” will
and the likely “after” will are contradictory. In such cases, professional
benevolence will likely opt for the “after” person’s likely will but will still
be considered arrogant, condescending paternalism by the “before” person or
even by the “after” person if the physician or researcher is mistaken. On the other hand, if the “before” person’s
will is honored, many will deem the act to be one of self-serving, pandering,
exploitation of the subject, rather than honoring autonomy or merely “meeting a
demand by providing a desired service.”
Caught in this dilemma, it seems that a physician’s oath (and common
decency) requires his/her choosing the option most beneficial or least harmful
for the patient because the will of the before person and the likely will of
the after person cancel each other out. An explanation will be required even if it is not
convincing to the before person, but sometimes a demonstration of one sort or
another can be given to them that helps make the explanation more persuasive,
whether it is testimonials of prior patients, video of their reactions, or in
some cases a physical temporary demonstration to the person of what it would be
like for him or her if things went awry, allowing them to see at least some
semblance of the consequences for themselves in order to try to heighten their
awareness of the significance to them of those consequences.
The Inflexibility of Research Protocols
We also wish to take issue with proposition 11 at the
beginning of this article, because it is not clear why a Procrustean protocol
is required to give evidence of clinical safety and effectiveness when clinical
medicine is not practiced in a Procrustean way. “One of the
problems presented by much research designed to determine the safety and
efficacy of drugs is that this activity is much less experimental than the
practice of medicine. It must, in
general, conform to the specifications of a protocol. Thus, the individualized dosage adjustments
and changes in therapeutic modalities are less likely to occur in the context
of a clinical trial than they are in the practice of medicine. This deprivation
of the experimentation ordinarily done to enhance the well-being of a patient
is one of the burdens imposed on the patient-subject [in] a clinical trial.”[20] It is not clear, for example, why a pre-determined dosage, if excess seems pretty clearly to be the cause of unpleasant side-effects, yields valid results, since that would not be the dosage that would be used in practice.
And it is not clear why a pre-determined dosage, if it gives some
benefit that would be more helpful if increased, is proscribed when the
increased dosage level might actually be the effective dose. Even if having patient-centered dosages
requires more work to analyze the data, it still seems to be the more reasonable
way to conduct a trial. It is certainly not clear that saving time, labor, and
money to analyze irrelevant data is preferable to spending more time and effort
analyzing the relevant data. Doing the
wrong thing more easily and quickly should not be preferable to doing the right
thing more arduously.
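Though ours is a conceptual argument rather than a statistical one, the point can be illustrated with a small simulation. The following sketch uses invented numbers (a hypothetical drug that works only within a per-patient therapeutic window); it is not a model of any actual trial:

import random

# Hypothetical illustration (invented numbers, not from any actual trial):
# suppose a drug is effective only within a per-patient therapeutic window.
# A fixed protocol dose can then look largely ineffective even though
# individualized titration would make it work for nearly every patient.

random.seed(0)

def response(dose, best_dose, tolerance=1.0):
    # 1 (benefit) if the dose falls within the patient's therapeutic window, else 0.
    return 1 if abs(dose - best_dose) <= tolerance else 0

# Each patient's (unknown) best dose, spread over a wide range.
patients = [random.uniform(2.0, 10.0) for _ in range(10000)]

fixed_dose = 6.0
fixed_arm = sum(response(fixed_dose, b) for b in patients) / len(patients)

# Titrated arm: the clinician adjusts toward each patient's effective dose,
# as would happen in ordinary clinical practice.
titrated_arm = sum(response(b, b) for b in patients) / len(patients)

print("Apparent efficacy at the fixed protocol dose: {:.0%}".format(fixed_arm))
print("Apparent efficacy with individualized dosing: {:.0%}".format(titrated_arm))

Under these invented assumptions the fixed-dose arm reports roughly 25% efficacy for a treatment that titration would make effective for essentially every patient, which is the sense in which a rigid protocol can yield results that are valid about the protocol but not about the therapy as it would actually be practiced.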
[1]
This is often stated as a principle of respect for persons or a person’s right
to autonomy, but we believe the way we have stated it is more basic and more
intelligible. Moreover, the proposition
we have stated is not explained or justified by appeals to autonomy or respect
for persons because those usually just mean in that context that one ought not
to control people or make them do something against their will. Thus saying one should not control people
because they ought to have autonomy is merely to say one should not control
people because one ought not to control people. [2]
The “would be against their will” part is essential because one is exploiting
someone else if one knows one can talk them into something one knows they will
later regret and one knowingly and intentionally does it anyway. Similarly if one does something wrong to
someone who is asleep or unconscious or otherwise unaware you are doing
it. The point is not that they do not
disapprove now, but that they would disapprove if they knew what has been done
to them, or that they will disapprove when they discover what has been done to
them. [3]
Some authors have advanced the position that informed, valid, or educated
consent absolves physician researchers from having to treat their research
subjects (or patients to whom they recommend participating in research) in a
way that their oath requires them to treat patients, in particular the
requirement by oath not to cause or allow harm and to treat to the best of
their ability. The rationale is based on
respect for the patient’s autonomy and wishes (to participate or continue
participating in a research trial). The above, along with explanations and
arguments yet to be presented here, are meant to rebut that view. [4] It
is sometimes thought this ethical dilemma of having to choose withholding
treatment from some individuals in order to benefit others for the greater
general good is peculiar to physicians
as researchers because of these lines in their professional oath: Hippocrates version: I will
apply dietetic measures for the benefit of the sick according to my ability and
judgment; I will keep them from harm and injustice. Modern version 1: I will
apply, for the benefit of the sick, all measures [that] are required.... I will
prevent disease whenever I can.... But everyone is bound by ethical maxims not to harm
others unjustly and to help others where they reasonably can. For all intents and purposes any researcher
who is qualified to administer treatments, or who utilizes the services in the
study of those who can, would have as much moral obligation to do so as a
physician would. The moral dilemma would
thus apply to them as much as it would apply to a physician as researcher. [5]
The consequentialist/deontological dichotomy is an unnecessary one brought
about by trying to reduce ethics to some single or simplistic principle of what
is right and what is good. In everyday life, there are plenty of examples of
utilitarianism’s legitimately taking precedence over deontological principles,
and there are plenty of examples where utilitarianism falls short and fails to
legitimately override deontological principles.
For those interested in a full account of that, and a more reasonable principle
that incorporates the best of both theories, see “An Introduction to Ethics” at www.akat.com/Ethics.html [6]
There is an interesting trap that is easy to fall into: when focusing on the
potential benefits of an experimental intervention, it is easy to think that
placebo arms of a treatment do a disservice to the subject/patient and violate
the physician’s oath to be beneficent.
So it is easy to oppose placebo controls when concentrating on the
potential benefits rather than the potential harms. But surely in some trials,
such as the hormone replacement therapy (HRT) clinical trials, where the
treatment proved fatal to too many women in the treatment arm, it is better to
have been in the control group. [7]
Fred Gifford has pointed out what he considers to be other “types of equipoise” [Fred Gifford, “Freedman’s ‘Clinical Equipoise’ and Sliding-Scale All-Dimensions-Considered Equipoise,” Journal of Medicine and Philosophy 25, no. 4 (2000): 399-426].
While he raises important points, we do not think that subsuming them
under “types of equipoise” is the clearest or best way to address them, and we
will do it differently here where some of the concepts we discuss happen to
coincide with his. [8]
“Theoretical equipoise exists when, overall, the evidence on behalf of two
alternative treatment regimens is exactly balanced. […] Theoretical equipoise
is overwhelmingly fragile; that is, it is disturbed by a slight accretion of
evidence favoring one arm of the trial. In Chalmers’ view, equipoise is
disturbed when the odds that [treatment, or trial arm] A will be more
successful than [treatment, or trial arm] B are anything other than 50%. […] We
may say that theoretical equipoise is balanced on a knife’s edge.” (Benjamin
Freedman, “Equipoise and the Ethics of Clinical Research,” New England Journal of Medicine 317 (1987): 141-45.) Samuel Hellman and Deborah S. Hellman basically
reinforce this view when they argue that physicians’ beliefs, including
hunches, based on uncontrolled evidence or experience are ethically sufficient
to proscribe the physician’s putting research subjects (or acquiescing to
patients becoming research subjects) in arms of trials that do not act on those
beliefs. (“Of Mice but Not Men: Problems of the Randomized Clinical Trial,” New England Journal of Medicine 324
(1991): 1585-89.) While that is true in
some cases, it is not true in all. Some
beliefs are stronger than others and are based on better evidence than others. All that is necessary for morally relevant
equipoise to occur is that the physician has some reasonable doubt about the
belief. Hellman and Hellman are correct
that RCTs are not the only method of preventing such doubt, but that does not
mean that every belief based on anecdotal or personal experience outside of
clinical trials is unaccompanied by reasonable doubt. It is simply reasonable uncertainty that
matters morally in determining theoretical equipoise. [9]
“Significance for the patient” has to do with how important the potential
outcomes might be for the patient, which can vary from one individual to
another even though the quantitative magnitudes might be the same, or which can
vary even in the same individual at different times in his/her life. E.g., those with a high tolerance for pain
might eschew pain killers that put their thinking ability into a drug “fog” if
they do not like drug “fog”. Someone
with a low tolerance for pain who does not mind having his or her thinking
“dulled” might prefer the pain killer.
Or a quadriplegic with a possibly deteriorating condition might not want
to risk surgery that could end up with a worse outcome than the outcome of
deterioration, if the latter is acceptable and the former is not, particularly
if the outcome of successful surgery merely leaves him/her at the baseline
condition where s/he is now. The
baseline condition may or may not be significantly preferable to the worst
outcome. And at the same time, the best
outcome might not be sufficiently more personally valuable to the patient than
the outcome of the deterioration. In
regard to a research study, while the probabilities and magnitudes of potential
burdens and benefits may be the same for two different subjects, the value or
significance of the different possibilities may be strikingly different for
them – justifying participation of the one and not of the other. A common
American example of the difference between magnitude and significance is seen
in the difference in American football between trying a two-point scoring
conversion early in the game versus at the end of the game with no time left on
the clock and the team that has just scored a touchdown still trailing by two
points. In all situations except where point spreads are important (for
determining playoff or future draft choice opportunities, for example) or where
an individual record may be more important than team victories (e.g., most
point-after conversions by a kicker), while the probabilities and magnitudes of
the different attempts are the same early and late in the game in the case
described, the significance of the magnitude – the two points, as opposed to
the one point – is very different.
Similarly, for someone who is likely to die around the time of a certain important event they would like to see (e.g., the birth of a grandchild), an experimental treatment that might prolong their life by a few days or shorten it by a few days may have different significances. If they are expected to die before the
date that is important to them, the few days of extra life the treatment might
afford them could be very significant, and the time the treatment would cut off
if it fails might not be significant to them.
On the other hand if they are expected to die after the date that is
important to them, there is more negative significance attached to the
treatment’s failure than positive significance attached to its success. The magnitude in terms of days lost or gained
is the same, but the significance of the magnitude is different. The concept we are referring to as
“significance” is very important in decision-making, and we will show it is
important also in trying to prevent exploitation of research subjects.
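The distinction can be made concrete with a small worked example (the numbers are invented purely for illustration): if we treat “significance” as a personal weight a patient places on each outcome, two patients facing identical probabilities and magnitudes can rationally make opposite choices.

def expected_significance(p_gain, gain_value, loss_value):
    # Expected personal value of a gamble, weighting each outcome by its
    # significance to the patient rather than by its raw magnitude.
    return p_gain * gain_value + (1 - p_gain) * loss_value

# Both patients face the same gamble: a 50% chance of gaining three days of
# life, and a 50% chance of losing three days. The magnitudes are identical.

# Patient A expects to die just before a grandchild's birth: three extra days
# would mean seeing it (enormous significance); three fewer days change little.
patient_a = expected_significance(0.5, gain_value=100, loss_value=-5)

# Patient B expects to die just after the birth: three extra days add little,
# but three fewer days would mean missing it.
patient_b = expected_significance(0.5, gain_value=5, loss_value=-100)

print("Patient A: {:+.1f} -> treatment worth accepting".format(patient_a))
print("Patient B: {:+.1f} -> treatment worth declining".format(patient_b))

[10]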
Benjamin Freedman, for example, points out that you cannot merely “subtract”
the placebo data from the experimental data to arrive at a supposed “net
therapeutic advantage” of the treatment. Moreover, he contends that even if you
could, that might have significance for pure research, but not for clinical
practice. See his “Placebo-Controlled
Trials and the Logic of Clinical Purpose,” IRB:
A Review of Human Subjects Research 12, no. 6 (1990): 1-6. [11]
Examples of this in educational research occur in those studies that have
“shown” class size and length of school day to be the most causally relevant
factors related to student learning.
When other variables thought to have been important (e.g., parents’ level of education, social/economic status, gender, ethnicity, etc.) are controlled for in large scale research groups, all other factors cancel out. But in none of these studies is quality of
teaching a factor, and it is presumably proportionally equally distributed
between control and experimental groups (or between different experimental
groups, depending on how the study is conceived). If, as would seem reasonable, quality of teaching matters, that fact is hidden by the aggregate results. But it is difficult to imagine that students
who have good teachers for half a day will learn less than students who have
bad teachers for a full day, or that those students in large classes with
excellent teachers will learn less than students in smaller classes with bad
teachers. As anyone knows who has ever been individually tutored at length by a bad teacher, personal attention and duration of time are of little avail. Large scale studies, where
the actual causal factors are not known or suspected and not properly “tested
for,” can obscure those causal factors.
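A small simulation, with effect sizes invented purely for illustration, shows how an unmeasured dominant factor can disappear from the aggregate comparison:

import random

# Hypothetical illustration (invented effect sizes): suppose learning depends
# ten times more on (unmeasured) teaching quality than on (measured) class
# size. Quality is distributed evenly across both arms, so it "cancels out"
# of the aggregate comparison, and class size emerges as the visible factor.

random.seed(1)

def learning(quality, small_class):
    return 10 * quality + (1 if small_class else 0) + random.gauss(0, 0.5)

small = [learning(random.uniform(0, 1), True) for _ in range(50000)]
large = [learning(random.uniform(0, 1), False) for _ in range(50000)]

def avg(xs): return sum(xs) / len(xs)

print("Aggregate: small classes beat large by {:.2f} points".format(avg(small) - avg(large)))

# Yet a large class with a good teacher far outperforms a small class with a
# bad one, a comparison the aggregate study never makes:
print("Good teacher, large class: {:.1f} expected points".format(10 * 0.9 + 0))
print("Bad teacher, small class: {:.1f} expected points".format(10 * 0.2 + 1))

[12]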
E.g., in some cases, a 20% success rate may not mean that a treatment is 80%
ineffective but that it is 100% effective under conditions we don’t yet know
and 100% ineffective absent those conditions.
If Einstein was right to believe that metaphorically “God does not play dice with the universe”,
then probabilities are more a sign of lack of knowledge of specific causes than
they are a sign that something is causally effective only a certain percentage
of the time. [13]
E.g., see Paul S. Appelbaum, Loren H. Roth, Charles W. Lidz, Paul Benson, and
William Winslade, “False Hopes and Best Data: Consent to Research and the
Therapeutic Misconception,” Hastings
Center Report 17, no. 2 (1987): 20-24. [14]
Franz J. Ingelfinger, “Informed (But Uneducated) Consent,” New England Journal of Medicine 287 (1972): 465-66. [15]
Jay Katz, “Human Experimentation and Human Rights,” St. Louis University Law Journal 38 (1993): 7-54. [16] They write, “We believe that as with clinical
care, in the case of many randomized, controlled trials, the patient’s
participation can and should be considered to be authorized by his or her
general consent for treatment and that specific consent should not be
required.” [17] Rui
Wang, Stephen W. Lagakos, James H. Ware, David J. Hunter, Jeffrey M. Drazen;
“Statistics in Medicine – Reporting of Subgroup Analyses in Clinical Trials,” The New England Journal of Medicine
357, no. 21 (Nov. 22, 2007): 2194. [18] What
happens in any formal system is that rules and regulations are utilized or
developed which seem to be fair, which seem to be necessary and useful, and
which seem to capture the spirit and purpose of the endeavor being established.
Unfortunately not all circumstances or applications of the rules can be
anticipated, and three different kinds of problems will typically arise in any
formal, procedural system. (1) The loophole problem -- instances will arise
that meet the letter of the rules but which will appear at least to some people
to violate their spirit or the purpose of the enterprise. In sports, an example
of that was Dean Smith’s University of North Carolina invention of the “Four Corners Offense”, which essentially turned their college basketball games that year into games of sophisticated “keep away”. (2) Instances will arise where
even sincere and faithful adherence to the letter and the spirit of the rules
leads to an unanticipated, undesirable result because the rules will be
incomplete, contradictory, or because one or more of them will not have
accurately captured what was intended or what was unconsciously understood. (3)
Unanticipated circumstances will arise for which the rules are inadequate or
antiquated, yielding undesirable results. An example of this last case
arose in American football when kickers became so strong that kicking the ball
off from the 40 yard line virtually eliminated "kick-off returns" and
their potential excitement, because kick-offs routinely went beyond the field
of play, for a touchback. The American collegiate and professional football
experience with "instant replay" checking of disputed referee calls
is a good example of the last part of the second kind of flaw. Instant video
replay analysis of the accuracy of on-field officials' calls was instituted
because from time to time during football games officials made mistaken calls
that were obvious and important. But the procedures instituted to detect and
correct the errors did not address the real problem. This is true of the collegiate experience with rules governing instant replay official review and with both of the (so far) two NFL attempts. Coaches were allowed to request
replay reviews, and play was held up while the video was analyzed. This slowed
down the game so much and did not make sufficient difference most of the time
to warrant the time and trouble, so "instant replay" officiating
review was dropped. But, of course, this left the original problem that
"instant replay" analysis was intended to prevent. It is my view that
the way the original instant replay was conceived and formalized is where they
went wrong. The original, and current, problem is that some calls in football are not only wrong and important to the outcome of the game, but are also seen that way by all the fans watching the game on tv, and by those who see the replay in the stadium on a large screen. Those calls are the only
ones that need to be reversed -- the obvious and egregiously wrong calls that
everyone watching tv sees. A referee who could stop the game and reverse such
calls could simply be stationed in a booth with a tv, acting when, and only
when, an obviously wrong call was made. That would not slow the game beyond
what is both necessary and acceptable to fans. Fans do not expect perfect referee
judgment about every difficult or close call; what they do not want is for
really terrible calls to be made that are obvious to everyone but the referees.
Understanding that would help football administrators develop better policies
for instant replay review than they have so far. Instead the current instant
replay rule in professional football requires that coaches make a challenge,
and give up a time-out if the challenged call is upheld. Plus, play has to be
stopped and the referee who made the call has to do the review -- sometimes
allowing him to repeat the error if it involves misinterpretation or
misunderstanding of a rule. And coaches get only two challenges a game, so if there are four bad calls during a game, at least two of them will be ignored.
"Instant replay challenges" then, as they are formally instantiated,
do not reflect the intention and point of allowing all and only obviously
egregious calls to be overridden more or less automatically whenever they
occur. [19]
Levine has pointed out different instances of medical research or practice that benefit others instead of the research subject, such as organ donor research, quarantine, etc. But even in those cases, the subject rightly
expects minimal harm. And in some of
those cases there is benefit to the subject in the form of his or her helping a
loved one whose life and well-being is dear to him or her. There is also, we think, benefit in
contributing to one’s society’s being one in which people help each other out
when they can. That is an indirect
personal benefit of helping others and setting an example; and, of course it
does not always “come back” to one, but it at least might help, and it makes
one deserving of future benefit in time of need, even if one does not receive
it. [20]
Robert J. Levine, Ethics and Regulation
of Clinical Research, 2nd edition (Baltimore: Urban and
Schwarzenberg, 1986), 3-10.
References:
[i]
Whyte J: Treatments to enhance recovery from the vegetative and minimally conscious
states: ethical issues surrounding efficacy studies. Am J Phys Med Rehabil 2007;86:86-92. [ii] Franklin
G. Miller and Howard Brody, “A Critique of Clinical Equipoise: Therapeutic
Misconception in the Ethics of Clinical Trials,” Hastings Center Report 33, no. 3 (2003): 19-28. [iii]
Robert D. Truog, Walter Robinson, Adrienne Randolph, and Alan Morris, “Is
Informed Consent Always Necessary for Randomized, Controlled Trials?” New England Journal of Medicine 340
(1999): 804-7. [iv]
U.S. Department of Health and Human Services, National Institutes of Health, and
Office for Human Research Protections, The Common Rule, Title 45 (Public
Welfare), Code of Federal Regulations, Part 46 (Protection of Human Subjects). [v]
The Council for International Organizations of Medical Sciences (CIOMS) in collaboration with the World Health Organization (WHO), International Ethical Guidelines for Biomedical Research Involving Human Subjects (Geneva: 2002).
International Conference on Harmonisation of Technical Requirements for
Registration of Pharmaceuticals for Human Use, ICH Harmonised Tripartite
Guideline – Guideline for Good Clinical Practice (ICH-GCP Guideline) (Geneva:
1996). [vii]
Garlikov, R. and Jackson, A, “Introduction to Perspectives on Ethical Issues
and Dilemmas in the Treatment of Patients with Spinal Cord Injury,” Topics in Spinal Cord Injury Rehabilitation
(2008); 13 (3):1-17.