CLIN. CHEM. 35/8, 1595-1600
(1989)
The Application of Expert Systems in the Clinical Laboratory
Per Winkel
An “expert system” consists of a knowledge base containing
information of a general nature and an inference system that
receives data from the user and applies the knowledge base
to produce advice and explanations. An expert system
stripped of its knowledge base (a tool) may be used to build
new expert systems. Existing systems relevant for laboratory
medicine are reviewed. The role in the laboratory of expert
systems and their integration and evaluation are discussed.
In this paper I will review and discuss a particular approach to computer-supported medical decision making (CMD), namely the use of the so-called expert systems. Other approaches to CMD include the use of clinical algorithms (1, 2), probabilistic reasoning [application of Bayes theorem, decision analysis (3-7)], and biodynamic modeling (8). These latter techniques,
although useful in many contexts where one needs computational power to solve well-defined problems, also have some limitations. They each assume as a starting point a limited set of well-defined hypotheses and cannot deal with more ill-structured diagnostic problems. Medical knowledge and heuristics for problem solving are never explicitly expressed. Furthermore, traditional programs lack conversational capabilities and the ability to explain the basis for their recommendations in terms that are understandable to the physician. Also, they usually must have a complete data set.
For these reasons, several researchers in artificial intelligence (AI) began to apply their techniques to develop systems for computer-aided medical decision making. Actually, more than 10 years ago the topic of AI was discussed within the context of using laboratory data in the decision-making process (9). AI emphasizes computer applications that involve the processing of symbolic information. The types of AI computer programs intended to assist in the clinical decision-making process are often referred to as expert systems.
The General Structure of an Expert System
Certain generalizations may be made about the nature and structure of expert systems (see Figure 1). The key component is the knowledge base. The knowledge base consists of facts such as those that can be found in a textbook, beliefs, and heuristic knowledge. Heuristic knowledge is the knowledge of good practice and judgement, i.e., the experiential knowledge that a skilled expert acquires over the years.
This information is difficult for the expert to make explicit. To help, he or she is assisted by a so-called knowledge engineer, who knows how to represent knowledge in a computer and construct a program that reasons on the basis of knowledge.
Department of Clinical Chemistry, University Hospital of Copenhagen, Blegdamsvej 9, DK-2100 Copenhagen, Denmark.
Received February 6, 1989; accepted April 14, 1989.
Fig. 1. Basic structure of an expert system
Three subjects or groups are involved in the construction and use of an expert system: the expert, who enters knowledge via knowledge-acquisition facilities into the knowledge base; the knowledge engineer, who assists him and also knows how to construct a program (the inference system) that reasons on the basis of knowledge; and the user of the system, who communicates with the system via the input/output system. Reproduced from (10), with permission
A knowledge base is different from a database. The knowledge base contains information of a general nature, whereas the database typically contains the individual patient's records. In addition to the knowledge base, an expert system needs an inference system that receives specific facts and data from the user and reasons on the basis of this information and the knowledge base to produce advice and explanations to the user.
Here I will review the basic principles of knowledge representation and design of inference systems that have been applied within the field of medical expert systems.
Knowledge Representation and Inference Procedures
Two different approaches to represent knowledge in expert systems have been followed. One approach is based on a representation of the knowledge in the form of rules (production rules, IF-THEN rules), which are applied in an orderly manner to produce the solution of a diagnostic problem. By the other approach, diseases and clinical, physiological, or pathophysiological states are each characterized by a stored pattern of anticipated findings to be matched with the clinical findings observed in the patient. These patterns, referred to as nodes or frames (see 11 for technical details about nodes and frames), are interconnected by links representing various relations and dependencies. Such networks of nodes and frames are excellent to represent hierarchical knowledge structures.
Newer programs (second-generation programs) are employing networks where the use of causal pathophysiological knowledge is emphasized. A typical feature of these programs is a taxonomic structuring of the nodes and links that allows diagnostic and other types of reasoning to proceed at various levels of detail.
Production-Rule-Based Systems
Knowledge may be represented as a collection of conditional sentences referred to as "production rules." A production rule consists of a set of preconditions, referred to as the premise, and an action part. If the premise is true, the conclusion in the action part is justified. A prominent example of a system based on the production-rule formalism is the MYCIN system (12), which provides consultation about infectious-disease diagnosis and choice of therapy.
An example of a rule from this system is as follows: IF
(1) the Gram stain of the organism is Gram negative, and
(2) the morphology of the organism is rod, and
(3) the aerobicity of the organism is anaerobic,
THEN there is suggestive evidence (0.6) that the genus of the organism is Bacteroides.
Each conclusion includes a certainty factor (CF) ranging from 1 (complete belief) via 0 (nothing known) to -1 (complete disbelief), and each assertion is associated with a CF. If the CF on a premise is positive above a certain threshold (0.2), the corresponding conclusion is drawn with a certainty that is equal to the premise's CF times the conclusion's own CF. Evidence confirming a hypothesis is collected separately from that which disconfirms it, and the truth of a hypothesis is the sum of the evidence for and against the same hypothesis. Other approaches for the handling of incorrect or uncertain data have been proposed (11, 13). MYCIN knows that its goal is to undertake a number of tasks, e.g., to determine if there is a substantial infection in the patient. To accomplish a goal, MYCIN evaluates all rules relevant for this goal. This creates a need to evaluate the premises of these rules, and these then become new subgoals, which are treated in the same way, i.e., the rules relevant for these subgoals are evaluated. When no rules are found that apply to a subgoal, the user is asked to supply the needed information. This way of searching for a solution is referred to as "backward chaining." A "forward chaining" or "data-driven" approach is used when the patient's data are entered without guidance by the computer. Those rules whose premises match the data are then applied, and new rules that use the conclusions in their premise conditions are subsequently applied, etc. Instead of using one of the two strategies, it is also possible to combine them into a mixed strategy.
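To make the mechanics concrete, the scheme described above can be sketched in a few lines of Python. This is only an illustration of backward chaining with certainty factors as outlined in the text, not actual MYCIN code; the rule encoding, fact names, and helper function are invented for the example.

```python
# Minimal sketch of MYCIN-style backward chaining with certainty factors.
# A rule is (premises, conclusion, rule CF); facts map findings to CFs.
RULES = [
    (["gram_negative", "rod", "anaerobic"], "bacteroides", 0.6),
]

THRESHOLD = 0.2  # the premise CF must exceed this for a rule to fire


def backward_chain(goal, facts, rules):
    """Return the CF for `goal`, from observed facts or by applying rules."""
    if goal in facts:                     # directly observed finding
        return facts[goal]
    best = 0.0
    for premises, conclusion, rule_cf in rules:
        if conclusion != goal:
            continue
        # Premise CF: the weakest of the preconditions (new subgoals).
        premise_cf = min(backward_chain(p, facts, rules) for p in premises)
        if premise_cf > THRESHOLD:
            # Conclusion CF = premise CF times the rule's own CF.
            best = max(best, premise_cf * rule_cf)
    return best


facts = {"gram_negative": 1.0, "rod": 1.0, "anaerobic": 0.8}
print(round(backward_chain("bacteroides", facts, RULES), 2))  # 0.48
```

A forward-chaining variant would instead scan all rules whose premises are already satisfied by the entered data and assert their conclusions repeatedly until nothing new fires.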
Expert Systems with Semantic Networks and Frames
In addition to production rules, semantic networks or frames may be used to represent knowledge. This knowledge representation has been used in a number of expert systems such as INTERNIST-I (14), which is designed to handle complex cases within the domain of general internal medicine, where several disease processes may be present. The INTERNIST-I program compares the patient's data entered initially to the frames in its knowledge base, establishes a hypothesis about the diagnosis, and pursues this hypothesis by asking questions about the patient. On the basis of the information gained it may reject the hypothesis and establish a new one, or it may suggest a diagnosis. If symptoms of sufficient importance remain unexplained after INTERNIST-I has made a diagnosis, it proceeds to establish additional diagnoses until all important findings have been explained.
In the CASNET/glaucoma program (15) the knowledge about glaucoma is represented as a network of pathophysiological states (detailed descriptions of physiological dysfunction) and causal links between states. Diseases are described as possible patterns of causally related states. During a consultation, the individual states are first confirmed or denied on the basis of observations. Then the pattern of states is matched with those described for the various diseases. The disease that provides the best match and the most parsimonious interpretation is suggested as the diagnosis.
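The pattern-matching step can be illustrated with a toy sketch: each disease frame stores a set of anticipated findings, and the frame that matches most observed findings while leaving the fewest anticipated findings unexplained wins. The disease names, finding names, and scoring weights are invented for illustration and do not come from any of the cited systems.

```python
# Toy frame matcher: anticipated-finding patterns per disease, scored by
# overlap with observations minus a penalty for unexplained findings
# (a crude stand-in for "most parsimonious interpretation").
FRAMES = {
    "iron_deficiency_anemia": {"low_ferritin", "microcytosis", "low_hemoglobin"},
    "b12_deficiency": {"macrocytosis", "low_b12", "low_hemoglobin"},
}


def best_match(observed, frames):
    def score(pattern):
        matched = len(observed & pattern)
        unexplained = len(pattern - observed)  # parsimony penalty
        return matched - 0.5 * unexplained

    return max(frames, key=lambda disease: score(frames[disease]))


obs = {"low_hemoglobin", "microcytosis", "low_ferritin"}
print(best_match(obs, FRAMES))  # → iron_deficiency_anemia
```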
One of the more sophisticated second-generation systems is ABEL (16), a program for use in the domain of acid-base and electrolyte disorders. Its knowledge base consists of clinical states and causal relations at multiple levels of detail, the most detailed level dealing explicitly with stores of electrolytes in various body compartments and their transport between compartments. The program allows the building of composite hypotheses (patient-specific models) of possible diseases. Because this is done at multiple levels of detail, the program allows the reasoning processes to be carried out at a level that corresponds to the body of data available.
The various existing expert systems have been reviewed (13, 17-22).
Expert System Tools
An expert system may be built from scratch, with any major programming language being used. Usually, an AI language (LISP or Prolog) or one of the structured languages such as Pascal or C is used. This, however, represents a major research undertaking. Fortunately, it is also possible to create an expert system without programming if an expert-system tool (or more simply, tool) is used. A tool may be defined as any piece of software intended to help design, deliver, or maintain an expert system (23). One of the first tools (EMYCIN) was developed from MYCIN (24). EMYCIN is MYCIN without its knowledge base; i.e., the inference engine and other programs are retained. When an expert system is built from EMYCIN, the knowledge base is entered in the form of production rules. Another early rule-based tool is EXPERT (25), derived from CASNET, an expert system for diagnosis and treatment of glaucoma (15). Today, several tools are available for a wide range of computers. Most are rule-based. In some of them (induction systems), examples in the form of matrices comprising attributes and corresponding outcomes are entered by the user and the rules derived from this material by the machine (11). In other systems, the rules are entered directly. A purely rule-based system has certain limitations in that the design knowledge, consisting of structural and strategic concepts, is implicitly present in the rules (26, 27). This makes it difficult to design and understand the sequence of events that takes place during inference. Some of the newer rule-based systems, such as NEOMYCIN, have been augmented to solve this problem (28). Tools designed for other types of knowledge bases, such as networks of frames, are also available, but they are less common. There are extensive catalogs of tools (29, 30).
Design Considerations
A stand-alone expert system is one that runs by itself and fully occupies its host computer. Most current expert systems are such. However, it may not be meaningful to introduce an expert system into the laboratory or elsewhere unless it is integrated with the Laboratory (or Hospital) Information System at some level. It is needlessly cumbersome to use manual procedures to enter data into the expert system if these data already are present on disk in the Laboratory Information System (LIS), but it may not be easy to integrate an expert system with an information system. Kwa et al. (31) integrated a relatively simple rule-based expert system with an information system. They felt a need to define a special command language, consisting of commands for both the expert system and the information system. A rather elegant integrated system was developed by Groth and Hakman (32, 33). They connected a prototype menu-driven PC intelligent workstation with an LIS based on the MIMER database management system (34). In addition to allowing the clinician to display data graphically and to perform statistical as well as model-based time-series analysis, the workstation also allows the clinician to access an expert system, using knowledge bases worked out in collaboration between the clinic and the laboratory. A renowned information system for health care that integrates decision-making systems with a common database is the HELP system (35). The core of the HELP system is an on-line clinical database. This database is complemented by a computerized medical-decision support system comprising a set of modular decision criteria and a program for processing this logic.
The introduction of international data-transfer standards (36) would help define minimum requirements that vendors of expert-system tools must meet to facilitate integration. If this were done, one might require that at least the following demands be met by a tool: (a) it should be able to read pertinent data dumped by the LIS on a disk in a standardized format, and (b) it should be able to dump its conclusions on a disk by using the same standardized format.
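Demands (a) and (b) amount to a read side and a write side sharing one agreed format. The sketch below uses CSV merely as a stand-in for whatever standard is adopted; the field names, patient identifier, and conclusion text are invented for the example.

```python
# Sketch of the two integration demands: read a standardized LIS dump,
# write conclusions back in the same standardized format (CSV here).
import csv
import io

lis_dump = "patient_id,analyte,value\n123,sodium,128\n123,potassium,5.9\n"


def read_lis(dump):
    """Demand (a): parse pertinent data dumped by the LIS."""
    return list(csv.DictReader(io.StringIO(dump)))


def write_conclusions(rows):
    """Demand (b): dump conclusions using the same standardized format."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["patient_id", "conclusion"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()


data = read_lis(lis_dump)
conclusions = [{"patient_id": "123", "conclusion": "hyponatremia, hyperkalemia"}]
print(write_conclusions(conclusions))
```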
Classification of Expert Systems According to Domain and Type of Problems Solved within the Domain
Table 1 lists some examples of expert systems emphasizing the use of laboratory data. The majority of systems are diagnostic systems that output a diagnosis and, possibly, suggestions for treatment. In addition to diagnoses, some systems such as ABEL (16) make predictions, e.g., about possible effects of different therapeutic interventions.
Of particular relevance for the clinical pathologist is the planning involved in the patient workup, because this has a bearing on the efficient utilization of the laboratory services. PHEO-ATTENDING is an expert system that assists the physician in this planning (42). It is designed to critique a physician's workup of a patient with suspected pheochromocytoma. In contrast to more traditional systems, this system first asks the physician to describe his (or her) patient and to outline the approach planned. The system then critiques that plan to help the physician make the workup as rational and efficient as possible. A recommended sequence of workup is built into the system's knowledge base, which is organized as expressive frames associated with each test or procedure. Each frame contains a list of comments that may be output in discussing the use of the test or procedure. Each comment has an associated condition that indicates when it is to be output as part of the critique. For instance, if a CT scan is ordered without prior screening tests, a comment in the CT scan frame
Table 1. Some Expert Systems Emphasizing the Use of Laboratory Data
System           Domain                        Use                               Reference no.
ABEL             Acid-base, electrolytes       Diagnosis, prediction             16
ANEMIA           Anemia                        Diagnosis                         37
Consult-I        Anemia                        Diagnosis                         38
PATHFINDER       Lymph-node histopathology     Diagnosis                         39
EMYCIN           Leukemia                      Diagnosis                         40
RED              Erythrocyte antibodies        Diagnosis                         41
PHEO-ATTENDING   Pheochromocytoma              Critique of patient work-up       42
EXPERT           Serum proteins                Diagnosis                         43
EXPERT           Lipoprotein metabolism        Diagnosis                         44
LITHOS           X-ray anal. of renal stones   Diagnosis (of stone content)      45
PRO.M.O.         Outpatient testing            Diagnosis, planning               46
LIVER            100 diseases                  Diagnosis                         47
SMR              Multiple test requests        Diagnosis, interpretive comments  48
suggests that these should be ordered first and that a CT scan is not indicated. The various comments generated by activating the various frames during a consultation are combined into a smooth narrative text, which is then output as a final critique. Van Lente et al. (46) used the EXPERT system (25) for sequential laboratory testing and interpretation in an outpatient setting. Abnormalities in the initial test profile initiate sequential laboratory testing of the specimen already collected. The incremental charge to a patient evaluated by this sequential testing program was only slightly more than the phlebotomy charge that would be incurred if one additional specimen were required for the physician to investigate an abnormality independently.
Some expert systems have been limited to instruments. Weiss and Kulikowski used the EXPERT building tool (25) to construct a system that interprets serum electrophoresis patterns (43). This was later incorporated in a densitometer. Wulkan and Leijnse (45) reported a system in which a PC connected to an x-ray diffraction analyzer system preprocesses the diffraction data on-line and stores selected data in a LISP format. These data are subsequently interpreted by a rule-based expert system, LITHOS, which reports the components and relative content of renal stones. The system was developed because of a shortage of the specialists needed to interpret diffractograms.
Evaluation of Expert Systems and Expert-System Tools
The evaluation problems related to expert systems are many and difficult, and much research is needed before they can be resolved.
In discussing the issue of evaluation, it is important to make a clear distinction between the evaluation of the tool selected for a particular problem and the evaluation of the expert system it has been used to build (23).
Miller (49) recently discussed the problems related to the evaluation of an expert system. He distinguished three different levels of evaluation of expert systems: (a) the subjective evaluation of the research contribution of a development prototype (i.e., does it provide new problems, solutions of problems, tools, or perhaps illustrative mistakes?); (b) the validation of a system's knowledge and performance; and (c) the evaluation of the clinical efficacy of an operational system. The validation of a system's knowledge includes an assessment of the accuracy, completeness, and consistency of the knowledge and the performance of the system, i.e., does it mirror the expert's performance? A critical aspect related to the system's performance is its ability to know its own limitations. That is, if a system is presented with an unfamiliar case, it should fail to make a diagnosis. Some of the diagnostic systems that have been tested in comparison with various experts have actually performed quite remarkably (14, 43, 50, 51). For instance, when eight evaluators evaluated each others' performance as well as that of MYCIN, none of the evaluators achieved better approval than MYCIN (70%) (50). Quaglini and Stefanelli (51) had six expert hematologists from different countries evaluate the diagnostic accuracy as well as quality of medical reasoning of their colleagues and the expert system ANEMIA (37). The evaluation was done blindly. ANEMIA's performance was not significantly different from that of the experts. ANEMIA's and the experts' diagnostic performance was rated acceptable in 87% and 90% of the 30 cases, respectively. The corresponding results for medical reasoning were 87% and 80%. An interesting approach that has been implemented in some systems is to have them assist in the process of knowledge validation by searching their own knowledge bases for inconsistencies or incompleteness or by facilitating inspection of their knowledge bases (52, 53).
The evaluation of an operational system depends on the nature of the system, the domain, and the clinical role it is asked to play. Issues of interest include an assessment of the physician's (modified?) behavior, the impact of the system on quality of patient care, the patient's health, and the overall health-care process. The economic cost effectiveness, the user's subjective reaction to the system, the user interface, the use of the system (does the physician find it useful and continue to use it?), and peer reviews are also important measures. Within the laboratory, the impact on the total laboratory operation should be evaluated, i.e., the impact on test-turnaround time, test workload, staffing, hardware and software purchase, and maintenance of the quality of the laboratory's consultative role. There are relatively few evaluations of operational systems. Evaluations have mainly been done on developmental systems.
The evaluation of a tool poses some difficult problems. The appropriate tool can be chosen only after the requirements imposed by the problem are understood. Expert systems, however, are appropriate in instances where the problem domain is complex. Therefore, understanding these requirements may be possible only after experimenting with several prototypes.
This suggests the following strategy: For the early phases of conceptualization and prototyping, a flexible tool is chosen that allows rapid development, elicits different approaches and representations, and allows the user quickly to try alternative implementations. On the basis of what was learned during prototyping, a tool is then chosen, which may very well be a different one. In a report from the RAND Corporation, the evaluation of expert-system tools is discussed (23). Five development phases are distinguished: exploration, prototyping, development, delivery, and operation. The tool requirements vary somewhat with the phases. Flexibility is important during the initial phases. Extensibility and vendor support are particularly important issues during prototyping and development. Later on, the user of the tool assumes the responsibility of maintenance relative to the end-user of the target expert system. Efficiency, which includes speed of response and utilization of computational and memory resources, becomes very important when the target system is operational. The clarity, i.e., the ease of understanding and using the tool, is important during all phases. Costs include the purchase price and support costs of the tool, resources consumed (person power, machinery, supplies, computation used, elapsed time, etc.), and hidden costs such as costs of training and integration. For the finished system, costs and efforts are involved in maintaining the system. The overall costs anticipated when a project is started should be weighed against the expected lifetime of the system and the potential benefits of such a system. Costs are important through all phases, but they are particularly important during the transitions between phases, because here we have to decide if we should switch tools or stay with what we have, or perhaps close the project.
The proper choice of tool clearly depends very much on the domain and problems to be solved and the expertise available. The target environment may impose several constraints, in particular the need to integrate the tool with existing hardware and software. Also, the characteristics of the expected end-user (pathologist or clinician) determine user-interface and explanation requirements for the target system. In my opinion, the following requirements should at least be fulfilled by a tool during all phases, except perhaps the earliest ones: (a) the tool should link to the local LIS or HIS, a graphics program, and a suitable statistical package; (b) it should support both data-driven and goal-driven reasoning; and (c) it should have computational capabilities.
The Role of Expert Systems in the Clinical Laboratory
A recent survey of 102 individuals, mainly in academic clinical laboratory settings, who had expressed an interest in computers indicated that 24% were involved in developing knowledge-based systems, with most systems being at an early stage of development (54). Thus, there is considerable interest in these systems. However, do they have a role in the clinical laboratory and, if so, what is that role?
In most cases, patient management requires the combined use of data from a variety of sources, including the laboratory. Therefore, as pointed out by Korpmann (55), it makes a lot of sense to establish a single database of all the patient data available, such that these data are available wherever patient care is rendered or patient-care decisions are made. Within this model the clinical pathologist is an active contributor of data to the common database and not simply an end-user of an expert system utilizing these data. As one would expect, studies (56, 57) have shown that the performance of expert systems deteriorates when they are confined to laboratory data instead of incorporating all clinical data, including the laboratory data. Although the clinical pathologist should not be an end-user of these expert systems, he/she ought to play a significant role during their development because he/she is an important contributor of expert knowledge, not least when it comes to sensible planning of the request for laboratory services during patient workup and monitoring.
It is not really meaningful to locate an expert system in the laboratory unless the data of the domain are truly confined to those data produced by the laboratory. In other words, the problems addressed by expert systems located in the laboratory should be those concerned with the preprocessing of laboratory-generated data before they are transmitted to the common patient database.
The types of problems suitable for an expert system are problems that require expert knowledge if they are to be solved. Thus, the problems should be difficult ones. On the other hand, they should not be too difficult. A solution that requires between 10 min and 3 h of a clinician's time has been suggested as a good expert-system working range (11). It is also recommended that the domain knowledge be narrow and precise and the nature of the knowledge be such that it can be represented in symbolic form. The most useful knowledge is the heuristic type gained from years of experience in practical problem solving. Many of the problems that are confined mainly to laboratory data and therefore should be solved by the clinical pathologist are either rather simple or they require a lot of computation and, ideally, the use of specific mathematical, biochemical models (8, 58, 59), in my opinion at least. Thus, tools other than expert systems may be more appropriate for most of those problems that require the expertise of the clinical pathologist alone.
Furthermore, routine cases do not require much expertise, and the processing of a case by an expert system may consume considerable resources, so it seems important that some kind of computerized screening mechanism be combined with the use of an expert system such that only the difficult cases are handed over to the expert system (55).
Potential Benefits and Pitfalls of Expert Systems and Other Computer-Assisted Systems for Medical Decision Making
There is a need for dealing optimally with the medical knowledge explosion of the last 20 to 30 years. The inability of physicians to deal with this knowledge is evidenced by several studies (60, 61). One example is the demonstration that <5% of the laboratory test data produced are actually used (62, 63). Computer-assisted decision-making (CDM) systems may be the solution to this problem. This contention is supported by the fact that some systems have actually resulted in tangible improvements in clinical performance (64).
It does not necessarily follow from this that CDM systems help in reducing the gathering of unnecessary information. Thus, the CDM systems may use all the data without a true need for the information. The pros and cons of CDM systems were discussed in a recent paper by de Dombal and Hilden (65). The potential benefits of CDM systems in the laboratory are improved speed and consistency in data interpretation. Computer-assisted analysis of data to separate the uninteresting cases from the few interesting ones, and well-arranged presentation of the latter, may improve the consultative role of the laboratory director and the teaching capability of the laboratory. Automated reduction and interpretation of the large number of data items accumulated in a given patient's file may improve and add to the diagnostic capability in the laboratory.
However, there are some pitfalls. Even though the need for CDM systems to be able to explain and justify their suggestions is emphasized, there is still a real danger that the reasoning behind the computer's recommendations will remain implicit and concealed from the user, the result being that CDM systems may lend authority to shallow knowledge and prejudices. In fact, what used to be easy may become difficult if simple and straightforward issues are dealt with by complicated expert-system tools. Furthermore, unless we define reasonable protocols for the evaluation of new systems, one may expect industry to flood the market with poorly conceived systems. Assuming that the medical quality and the transparency of these systems are sufficient, we are still left with some very difficult problems pertaining to the transfer of the systems between technical environments (moving systems from one machine to another) and between medical environments. The fact that medical science is developing very quickly implies that the lifetime of a CDM system, before it becomes obsolete, may be quite short. This may create a temptation to retain outdated systems.
Development of transparent systems that can explain their reasoning (i.e., the assumptions made by the systems become explicit) and that may be transferred between technical and medical environments and easily updated and modified by the end-user represents a real challenge.
I thank Dr. B. E. Statland for constructive and valuable criticism and for calling my attention to very useful references, and Dr. J. Hilden for a very inspiring lecture on benefits and pitfalls of computer-assisted medical decision making.
References
1 Burke MD. Clinical problem solving and laboratory investigation: contributions to laboratory medicine. In: Stefani M, Benson
ES, eds. Progr Clin Pathol 1981;8:1-24.
2. Lundberg G. Using the clinical laboratory in medical decision
making. Abstracts, meeting of the Am Soc Clin Pathol. Chicago:
ASCP, 1983.
3. Winkel P. The multivariate approach. Chapter 3.4 in: Curtius
HC, Roth M, eds. Clinical biochemistry. Principles, methods,
applications. Data presentation and interpretation. New York:
Walter de Gruyter & Co. (in press).
4 Beck JR., Meier FA, Rawnsley HM. Mathematical approaches
to the analysis of laboratory data. Progr Clin Pathol 1981;8:67100.
5. Szolovits P, Pauker SG. Categorical and probabilistic reasoning in medical diagnosis. Artif Intell 1978;11:115-44.
6. Galen RS, Gambino SR. Beyond normality: the predictive value
and efficiency of medical diagnosis. New York: John Wiley & Sons,
Inc., 1975.
7. Weinstein MC, Fineberg HV. Clinical decision analysis. Philadelphia: WB Saunders Co., 1980.
8. Groth T. The role of formal biodynamic models in laboratory
medicine. In: Groth T, de Verdier C-H, Benson, eds. Optimized use
of clinical laboratory data. Helsinki, Finland: The Nordic Clinical
Chemistry Project (NORDKEM). Scand J Clin Lab Invest
1984;44(Suppl 171):175-92.
9. Statland BE, Bauer S, eds. Computer-assisted decision making
using clinical and paraclinical laboratory data. Tarrytown, NY:
Mediad Incorporated, 1980.
10. Feigenbaum EA, McCorduck P. The fifth generation: artificial
intelligence and Japan’s computer challenge to the world. London:
Michael Joseph, 1984.
11. Frenzel LE. Understanding expert systems. Indianapolis:
Howard W. Sams and Company, 1987.
12. Shortliffe EH. Computer-based medical consultations:
MYCIN. New York: Elsevier, 1976.
13. Sandell HSH, Bourne JR. Expert systems in medicine: a biomedical engineering perspective. Crit Rev Biomed Eng 1985;12:95-129.
14. Miller RA, Pople HE, Myers JD. INTERNIST-1, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med 1982;307:468-76.
15. Weiss SM, Kulikowski CA, Amarel S, Safir A. A model-based method for computer-aided medical decision-making. Artif Intell 1978;11:145-72.
16. Patil RS, Szolovits P, Schwartz WB. Modelling knowledge of the patient in acid-base and electrolyte disorders. In: Szolovits P, ed. Artificial intelligence in medicine. Boulder, Colorado: Westview Press, 1982:191-226.
17. Shortliffe EH, Buchanan BG, Feigenbaum EA. Knowledge
engineering for medical decision making: a review of computer-based clinical decision aids. Proc IEEE 1979;67:1207-24.
18. Stefik M, Aikins J, Balzer R, et al. The organization of expert systems, a tutorial. Artif Intell 1982;18:135-73.
19. Duda RO, Shortliffe EH. Expert systems research. Science
1983;220:261-8.
20. Szolovits P, Patil RS, Schwartz WB. Artificial intelligence in medical diagnosis. Ann Intern Med 1988;108:80-7.
21. Barr A, Feigenbaum EA, eds. The handbook of artificial intelligence, Vol. 1. Los Altos, CA: William Kaufmann Inc., 1981.
22. Clancey WJ, Shortliffe EH, eds. Readings in medical artificial
intelligence: the first decade. Reading, MA: Addison-Wesley, 1984.
23. Rothenberg J, Paul J, Kameny I, Kipps JR, Swenson M. Evaluating expert system tools. A framework and methodology. Santa Monica, CA: The RAND Corporation, 1987.
24. van Melle W, Shortliffe EH, Buchanan BG. EMYCIN: a domain-independent system that aids in constructing knowledge-based consultation programs. Mach Intell, Infotech State of the Art Report 1981; Series 9, No. 3.
25. Weiss SM, Kulikowski CA. EXPERT: a system for developing consultation models. In: Proc Sixth Int'l Joint Conf on Artificial Intelligence. Tokyo: Information Processing Soc. of Jpn, 1979:942-7.
26. Georgeff MP. Procedural control in production systems. Artif Intell 1982;18:175-201.
27. Clancey WJ. The epistemology of a rule-based expert system: a framework for explanation. Artif Intell 1983;20:215-51.
28. Clancey WJ, Letsinger R. Neomycin: reconfiguring a rule-based expert system. In: Proc Seventh Int'l Conf on Artificial Intelligence. Los Altos, CA: M Kaufmann Publishers, 1981:829-36.
29. Bundy A, ed. Catalogue of artificial intelligence tools, 2nd rev.
ed. New York: Springer-Verlag, 1986.
30. Waterman DA. A guide to expert systems. The Teknowledge series in knowledge engineering. Reading, Massachusetts: Addison-Wesley, 1986.
31. Kwa HY, van der Lei J, Kors JA. Expert systems integrated with information systems. Comput Methods Programs Biomed 1987;25:327-32.
32. Groth T, Hakman M. A PC-workstation supporting interpretation of clinical chemistry laboratory data. In: Kerkhof PLM, van Dieijen-Visser MP, eds. Laboratory data and patient care. New York and London: Plenum Press, 1988:147-57.
33. Groth T. Data base management and knowledge-based systems in clinical laboratory medicine. In: Kerkhof PLM, van Dieijen-Visser MP, eds. Laboratory data and patient care. New York
and London: Plenum Press, 1988:101-8.
34. “MiMER. The software machine. Concepts and facilities”. Camforth: Savanth Enterprises,
1984.
35. Pryor TA, Gardner RM, Clayton PD, Warner HR. The HELP
system. J Med Syst 1983;7:87-102.
36. Elevitch FR, Boroviczeny KG. Transfer data. A proposed
international standard for interlaboratory information exchange.
Arch Pathol Lab Med 1985;109:496-8.
37. Quaglini S, Stefanelli M. ANEMIA: an expert consultation system. Comp Biomed Res 1986;19:13-27.
38. Blomberg DJ, Ladley JL, Fattu JM, Patrick EA. The use of an
expert system in the clinical laboratory as an aid in the diagnosis
of anaemia. Am J Clin Pathol 1987;87:608-13.
39. Horvitz EJ, Heckerman DE, Nathwani BN, et al. Diagnostic strategies in the hypothesis-directed PATHFINDER system. In: Proc First Conf Artificial Intelligence Appl. Silver Spring, MD: Institute of Electrical and Electronics Engineers Computer Society, 1984:630-6.
40. Fox J, Myers CD, Greaves MF, et al. Knowledge acquisition
for expert systems: experience in leukemia diagnosis. Methods Inf
Med 1985;24:65-72.
41. Smith JW, Svirbely JR, Evans CA, et al. RED: a red-cell
antibody identification expert module. J Med Syst 1985;9:121-38.
42. Miller PL, Blumenfrucht SJ, Black HR. An expert system
which critiques patient workup: modeling conflicting expertise.
Comp Biomed Res 1984;17:554-69.
43. Weiss SM, Kulikowski CA, Galen RS. Representing expertise in a computer program: the serum protein diagnostic program. J Clin Lab Automation 1983;3:383-7.
44. Trendelenburg C. Routine applications of the expert system PRO.M.D. In: Kerkhof PLM, van Dieijen-Visser MP, eds. Laboratory data and patient care. New York and London: Plenum Press, 1988.
45. Wulkan RW, Leijnse B. Experience with expert systems in
clinical chemistry. Ibid.
46. van Lente F, Castellani W, Chou D, Matzen RN, Galen RS. Application of the EXPERT consultation system to accelerated laboratory testing and interpretation. Clin Chem 1986;32:1719-25.
47. Chang E, McNeely M, Gamble K. Strategies for choosing the
next test in an expert system. In: Proc Amer Assoc Med Syst and
Informatics Congress 1984. Bethesda, MD: American Association
of Medical Systems and Informatics, 1984:198-202.
48. Wiener FM, Groth T. A system for simulating medical reasoning (SMR) providing expertise in clinical computer applications.
Automedica 1987;8:141-9.
49. Miller PL. The evaluation of artificial intelligence systems in
medicine. Comput Methods Programs Biomed 1986;22:5-11.
50. Yu VL, Fagan LM, Wraith SM, et al. Antimicrobial selection by a computer: a blinded evaluation by infectious disease experts. J Am Med Assoc 1979;242:1279-82.
51. Quaglini S, Stefanelli M. A performance evaluation of the expert system ANEMIA. Comp Biomed Res 1988;21:307-23.
52. Suwa M, Scott AC, Shortliffe EH. Completeness and consistency in a rule-based system. In: Buchanan BG, Shortliffe EH, eds.
Rule-based expert systems. Reading, MA: Addison-Wesley,
1984:159-70.
53. Politakis P, Weiss SM. A system for empirical experimentation with expert knowledge. In: Clancey WJ, Shortliffe EH, eds.
Readings in medical artificial intelligence: the first decade. Reading, MA: Addison-Wesley, 1984:426-43.
54. Spackman KA, Connelly DP. Knowledge-based systems in laboratory medicine and pathology. Arch Pathol Lab Med 1987;111:116-9.
55. Korpmann RA. Using the computer to optimize human performance in health care delivery. Arch Pathol Lab Med 1987;111:637-45.
56. Myers JD. The computer as a diagnostic consultant, with emphasis on use of laboratory data. Clin Chem 1986;32:1714-8.
57. Kerkhof PLM. Laboratory, patient and expert system as a triad in patient care. In: Kerkhof PLM, van Dieijen-Visser MP, eds. Laboratory data and patient care. New York and London: Plenum Press, 1988.
58. Winkel P, Bentzon MW, Statland BE. Predicting recurrence in
patients with breast cancer based on cumulative laboratory results. A new technique for the application of time series analysis.
Clin Chem 1982;28:2057-67.
59. Winkel P, Gaede F, Lyngbye J. Method for monitoring plasma progesterone concentrations in pregnancy. Clin Chem 1976;22:422-8.
60. de Dombal FT. Picking the best test in acute abdominal pain.
J R Coll Physicians Lond 1979;13:203-9.
61. Grover RG, Fries ED. A study of diagnostic errors. Ann Intern
Med 1957;47:108-20.
62. Dixon RH, Laszlo J. Utilization of clinical chemistry services by
medical house staff. Arch Intern Med 1974;134:1064-7.
63. Durbridge TC, Edwards F, Edwards RG, Atkinson M. Evaluation of benefits of screening tests done immediately on admission
to hospital. Clin Chem 1976;22:968-71.
64. Adams ID, Chan M, Clifford PC. Computer-aided diagnosis of
abdominal pain: a multi-centre study. Br Med J 1986;293:800-4.
65. de Dombal FT, Hilden J. Computer-aided decision support: the case in favour and the case against. Norwegian Medical Association Publication (in press).