Evaluating the Impact of Electronic Health Records on Clinical Reasoning Performance

Matthew J. Wills, Omar F. El-Gayar, Amit V. Deokar
Dakota State University
willsm@pluto.dsu.edu, omar.el-gayar@dsu.edu, amit.deokar@dsu.edu

Abstract

This paper adapts and extends the task-technology fit model of performance to the health care domain and the clinical reasoning task. Central to this effort was careful adaptation of the task and technology characteristics constructs to the clinical reasoning task and electronic health record technology. Overall, the results indicate a good fit between model and data. The contributions of this study include the successful adaptation of a cornerstone information systems theory to a new domain and technology, a validated user evaluation instrument able to assess the impact of EHR use on clinical reasoning performance, and new insight into the factors that impact task-technology fit and clinical reasoning performance.

1. Introduction

In the U.S., electronic health records (EHR) have emerged as the foundation of health information technology. Although fewer than 20 percent of physician practices have adopted the technology [20], recent directives and incentives from the U.S. federal government call for significant expansion of EHR adoption.

The practice of medicine is unlike any other vocation. Few other domains combine complexity and uncertainty in decision-making the way clinical medicine does. Clinical decisions are often a matter of life and death, and they are frequently made in a context where best practices, cost control, ethics and bias collide on a regular basis. With increasing adoption and use of health information technologies such as EHR systems, it is critical that we attempt to better understand how clinical reasoning performance is affected by system use.

At the heart of this research is the goal of addressing a gap in the literature: the lack of a tested, validated instrument for evaluating and predicting the impact of EHR use on clinical reasoning performance. While a variety of instruments exist for evaluating a number of important research questions pertaining to EHRs, none appear to deal specifically with the important issue of clinical reasoning performance.

From the information systems research tradition, this research uses task-technology fit (TTF) theory as the foundation for an evaluation instrument. TTF provides a theoretically grounded and empirically validated framework for evaluating perceived performance impacts resulting from information system use [31]. The premise of TTF is that individual performance will be enhanced when the functionality of the technology meets the user’s needs, i.e., fits the task at hand. The original TTF instrument was developed for the evaluation of multiple information systems and focused on managerial decision-making in the transportation and insurance industries [31].

Despite successful application to a variety of other industries, TTF has not been adequately adapted to healthcare, EHR technology or the clinical reasoning task. Accordingly, the objectives of this research are to: 1) produce a valid instrument with diagnostic and predictive capabilities for evaluation of clinical reasoning performance with electronic health records, and 2) extend and validate the TTF model to the clinical domain with an emphasis on specification of the clinical reasoning task and EHR technology characteristics.

Section 2 of this paper includes a focused review of the literature on clinical reasoning, health information technology evaluation and information systems performance evaluation research. The proposed model and its constructs are presented in Section 3. Section 4 addresses the research methodology, including data analysis. Section 5 discusses the results, and Section 6 concludes the paper with a summary of findings and future work.


2. Related work

2.1 EHR and clinical reasoning

The electronic health record is an aggregate electronic record of health-related information on an individual that is created and gathered cumulatively across more than one health care organization. It is managed and consulted by licensed clinicians and staff involved in the individual's health and care. The EHR is not one specific technology; rather, it is often understood as a composite of technologies including computerized provider order entry, clinical decision support, and administrative, laboratory and imaging systems.

Clinical reasoning is the broad term used to describe clinical problem-solving and decision-making. These terms are often used interchangeably; however, it is important to note that problem-solving and decision-making represent two distinct research paradigms in the cognitive sciences. Clinical decision-making typically refers to diagnostic and therapeutic decision-making, while clinical problem-solving is understood as the steps involved in finding a solution to the problem [15]. Here, the term clinical reasoning is used to describe both paradigms.

Although research on clinical reasoning has a tradition spanning decades, there exists no unified theory or explanation for how clinicians reason. Many of the existing theories differ only in the emphasis or terminology of the strategies used, rather than in the strategies themselves. Despite the theoretical variation of existing decision models, common themes and strategies have emerged from the cognitive literature. For example, present-day models generally agree that clinical reasoning can be understood as being either informal/intuitive or formal/analytical in nature, or some combination of both [16; 23; 24; 46].

Informal/intuitive reasoning is enhanced through the use of heuristics and pattern matching, strategies made possible largely by the progressive accumulation of domain knowledge over time and through clinical experience [23; 16]. The application of "rules of thumb" and the ability to identify or categorize patterns depend on time and clinical practice [15]. Conversely, the analytical strategies used for clinical reasoning are possible only through the use of specific learned techniques, such as hypothesis testing or probability estimation (i.e., Bayes theorem).
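As an illustration (ours, not the paper's), Bayes theorem prescribes how a pretest disease probability is revised by an observed sign or test result:

$$
P(D \mid S) \;=\; \frac{P(S \mid D)\,P(D)}{P(S \mid D)\,P(D) + P(S \mid \neg D)\,P(\neg D)}
$$

With illustrative numbers, a pretest probability $P(D) = 0.10$, sensitivity $P(S \mid D) = 0.90$ and false-positive rate $P(S \mid \neg D) = 0.20$ yield a posterior of $0.09/(0.09 + 0.18) = 0.33$: the finding triples the probability of disease yet remains far from conclusive.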

Goodhue [31] originally designed TTF around the task of managerial decision-making. To extend this model to the clinical domain and the clinical reasoning task, one question must be addressed: How is clinical reasoning different from managerial decision-making? To answer it, consider three types of decisions that arise during patient care: 1) the evaluation of signs and symptoms to formulate a diagnosis, 2) decisions about further tests needed to refine a diagnosis, and 3) treatment selection.

2.1.1 Diagnosis formulation: Clinical diagnosis is similar in many ways to diagnostic problems that arise in business and in everyday life. However, the clinical diagnostic task has a high degree of complexity and uncertainty that makes it unique. First, consider that there are thousands of diseases that can cause signs and symptoms. Second, each of these diseases can cause many different signs and symptoms. Third, the signs and symptoms of these diseases overlap; that is, most can be caused by more than one disease. Fourth, the relationships between diseases and signs and symptoms are uncertain. For each disease and every sign or symptom, there is a probability that each sign or symptom will occur with that disease, thus creating thousands of probabilistic relationships. To make matters worse, most of these probabilities are not well known.
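To make the combinatorics concrete, the following minimal sketch (all probabilities invented, with a naive independence assumption the paper does not itself make) shows how overlapping disease-sign relationships combine during diagnosis formulation:

```python
# Illustrative only: invented priors and conditional probabilities for three
# hypothetical diseases sharing overlapping signs. Naive independence is
# assumed purely to show the combinatorics described in the text.
PRIORS = {"flu": 0.05, "pneumonia": 0.01, "common_cold": 0.20}

# P(sign | disease): each sign can be caused by more than one disease.
P_SIGN = {
    "flu":         {"fever": 0.85, "cough": 0.80, "fatigue": 0.90},
    "pneumonia":   {"fever": 0.75, "cough": 0.90, "fatigue": 0.70},
    "common_cold": {"fever": 0.10, "cough": 0.60, "fatigue": 0.40},
}

def posterior(observed_signs):
    """Return normalized P(disease | signs) over the diseases considered."""
    scores = {}
    for disease, prior in PRIORS.items():
        likelihood = prior
        for sign in observed_signs:
            likelihood *= P_SIGN[disease].get(sign, 0.01)  # rare-sign floor
        scores[disease] = likelihood
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

print(posterior(["fever", "cough"]))
# Overlapping signs leave substantial probability on more than one disease,
# so the evidence narrows the differential without resolving it.
```

Even this toy version shows why the real task is hard: with thousands of diseases and signs, the corresponding table of conditional probabilities is enormous, and most of its entries are not well known.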

2.1.2 Test selection to refine diagnosis: The next step in diagnosis is assessing the need for additional information, choosing which tests or procedures should be done, and interpreting the results relative to the patient's diagnosis and management. After evaluating a patient's signs and symptoms, the physician may be uncertain about which disease the patient has. The decision to obtain additional information is complicated by the fact that there are usually several diagnostic tests and procedures to choose from; their uses overlap; none is likely to be conclusive; and each carries risks, financial costs, and potential negative effects on the patient. Because of this, the clinician must assess the value of the information each test can provide and weigh it against the procedure's risks, side effects, and costs. Moreover, the clinician must compare the test's expected impact with the expected impact of other tests that could be ordered.
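The following sketch (ours, with invented utilities and test characteristics) illustrates the trade-off just described: it compares the best achievable expected utility with and without the test result and subtracts the test's burden, so a positive value favors ordering the test.

```python
# Illustrative value-of-information calculation with invented numbers.
# Utilities U are on a 0-1 scale for (action, disease-present) pairs.
p_disease = 0.30            # probability of disease after history and exam
sens, spec = 0.90, 0.85     # hypothetical test sensitivity and specificity
U = {("treat", True): 0.80, ("treat", False): 0.60,
     ("wait",  True): 0.20, ("wait",  False): 0.95}

def eu(action, p):
    """Expected utility of an action at disease probability p."""
    return p * U[(action, True)] + (1 - p) * U[(action, False)]

best_without = max(eu(a, p_disease) for a in ("treat", "wait"))

# Probability of each test result, and disease probability given each result.
p_pos = sens * p_disease + (1 - spec) * (1 - p_disease)
p_dis_pos = sens * p_disease / p_pos
p_dis_neg = (1 - sens) * p_disease / (1 - p_pos)

best_with = (p_pos * max(eu(a, p_dis_pos) for a in ("treat", "wait"))
             + (1 - p_pos) * max(eu(a, p_dis_neg) for a in ("treat", "wait")))

test_burden = 0.01          # risk, discomfort, and cost folded into utility
print(best_with - best_without - test_burden)  # positive -> order the test
```

With these numbers the test is worth ordering; shrinking its discriminating power or raising its burden flips the decision, which is exactly the comparison the clinician must make for every candidate test.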

2.1.3 Treatment selection: In choosing a treatment, the clinician needs to understand how each possible treatment can affect each outcome that the patient considers important. Equally important, the clinician must understand how the patient values each outcome. Treatment selection is further challenged by considerable uncertainty regarding the effects of treatment on outcomes. Patients may respond differently to similar treatments or they might have multiple diseases whose treatments interact.
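One common decision-analytic formalization of this step (our illustration; the paper itself presents no formula) is to choose the treatment $T$ that maximizes expected utility over the outcomes $o$ the patient values:

$$
T^{*} \;=\; \arg\max_{T} \sum_{o} P(o \mid T)\, U_{\text{patient}}(o)
$$

where $P(o \mid T)$ captures the uncertain effect of treatment on each outcome and $U_{\text{patient}}(o)$ encodes how the patient values that outcome; both factors are precisely the quantities the text identifies as hard to know.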


Finally, each of the above decisions must be made within the context of a massive body of information and knowledge. In no other field, including managerial decision-making, is the decision task so dependent on such vast subject knowledge.

Real diagnostic problems involve many signs, symptoms, and tests; many diseases; uncertainty about the baseline probabilities of the diseases; uncertainty about the probabilities of the signs, symptoms, and test results; and dependencies between the signs, symptoms, and test results.

2.2 Health IT Evaluation

The use of health information technology offers a number of opportunities to improve health care. From reducing clinical errors to improving efficiency and quality of care, there is mounting evidence that information technology plays a critical role in the future of health care [9]. At the same time, there are potential pitfalls that must be avoided. Health information technology is expensive, and the failure of such systems could have negative effects on patients, staff and organizations. Given what is at stake, evaluation of health information technology is a valuable and necessary activity.

Evaluation studies have focused on a variety of questions. Some studies have questioned the usability of the technology, while others have asked which technical or system features affect its use. Evaluation research has examined how users [31] and patients adopt and accept information technology, and the impact of information technology on structural and/or process quality has also been studied.

Health information technology has also been evaluated in terms of investment and operational costs, the cost-effectiveness of implementation, and as a vehicle for implementing performance measures and implementation best practices. Feasibility and pilot studies are also common.

There are a number of challenges to evaluating health information technology. Among them are the complexity of the information technology itself, the complexity of the evaluation project, and the motivation for the evaluation [2]. Information systems are defined not only by their hardware and software components, but also by the social and behavioral processes of system use. This socio-technical complexity makes evaluation of information technology difficult on a number of levels.

Another major challenge to the evaluation of health information technology is the complexity of the overall evaluation project. Stakeholders in a health information technology project may have different notions of what constitutes "successful" information technology. Moreover, evaluation can be done from a variety of perspectives, including economic, technical, organizational, individual, administrative and clinical views. As Ammenwerth [2] points out, each perspective brings with it a multitude of choices about evaluation approach (objective v. subjective), methods (quantitative v. qualitative) and study design (randomized controlled trial v. observational).

A third major obstacle to health information technology evaluation is the motivation of stakeholders and participants. It can be difficult to recruit study participants who may already be burdened with learning a new system and for whom the benefits of participation may not be known or appreciated. Support from management is essential to participant recruitment.

2.3 Information Systems Utilization and Performance Research

With respect to the behavioral determinants of use, the Technology Acceptance Model (TAM) represents the first theory established specifically for the information systems (IS) context [17]. Other variations followed, including the combined Technology Acceptance Model and Theory of Planned Behavior (TAM-TPB) [55], the Technology Acceptance Model 2 (TAM2) [58], the Unified Theory of Acceptance and Use of Technology (UTAUT) [59] and the Technology Acceptance Model 3 (TAM3) [57].

Contrasted with models that predict acceptance and use, TTF attempts to explain user performance with information systems. In other words, the focus of TTF is on the outcome of the use-to-performance chain. The theory measures task-technology fit along multiple dimensions. Goodhue also demonstrated the validity of an instrument for information systems user evaluation based on TTF [29]. Later, it was established that user evaluations were effective surrogates for objective performance [30].

TTF has been examined in group performance situations [64; 53], as originally intended with a focus on managerial decision-making [25], and with an emphasis on ease-of-use [43]. TTF has also been extended with the technology acceptance model [22; 39; 48]. More recently, TTF has been the theoretical basis for a number of studies evaluating user performance with information systems. Vlahos et al. [60] investigated German managers' use, perceived value and satisfaction with information technology, and found that the TTF model was optimized when it included resource allocation, alternatives evaluation, problem identification and short-term decision making. Another study combined TTF with a cognitive element from Social Cognitive Theory (SCT) [40], investigating knowledge management system (KMS) usage in information technology. Here, perceived TTF, KMS self-efficacy, and personal and performance outcome expectations were found to have a significant impact on use. Figure 1 illustrates the TTF model.

Figure 1. Task-Technology Fit Model (Goodhue 1995b)

Another study addressed knowledge management (KM), technology usage and performance, this time in the context of a Chinese consulting firm [56]. Here, the investigators determined that output quality, data compatibility and knowledge tacitness (an extension of Goodhue's original model) were positively related to usage. The authors also concluded that utilization and compatibility were positively related to performance, and that TTF was more strongly related to performance than utilization was. Other research examined TTF in the context of mobile information systems [37], where the TTF construct of data locatability was examined in significant detail. Zigurs et al. [65] applied the theoretical perspective of frames to the challenges of virtual collaboration technologies.

The application of TTF in the healthcare domain has been quite limited to date. With the exception of Kilmon et al. [38] and Wills et al. [61], there are no studies employing TTF in user evaluation of EHR systems. Kilmon et al. [38] utilize the TTF instrument presented in Goodhue [31] as a diagnostic tool to evaluate a first-phase implementation of an EHR at a university hospital. While the results indicated that the system implementation was a success in terms of the task-technology fit, the study does not validate the TTF instrument in the healthcare context. Moreover, the study did not attempt to evaluate performance impact or the relationship between TTF and performance impact.

TTF has been and remains a suitable candidate for adaptation to other domains. As a model for evaluating clinical reasoning performance, TTF holds the potential to shed light on the relationships between EHR and clinical reasoning characteristics, their impact on task-technology fit, and the subsequent effects on utilization and performance.

3. The underlying TTF model adapted to the clinical domain

3.1 Task-Technology Fit

In building a task-fit process model for managerial decision making, Goodhue established three processes by which managers come to use organizational information: 1) identification of the data, 2) acquisition of the data, and 3) interpretation of the data. In the first step, formulating the structure of the problem leads to identification of the information needed to solve it. Goodhue [31] notes that identification may also be interconnected with choices about appropriate decision strategies. Once it has been determined that information is needed, the decision to acquire it is made. Acquisition requires the use of hardware and software to search for and extract the needed data. Interpretation and integration of the acquired data can be facilitated by computer support or other means; however, this third step is also dependent on the accuracy, credibility, presentation and compatibility of the data [31].

Clinicians pursue and use health information in much the same way as noted by Goodhue [31]. Once the decision to pursue information is made, the processes of identification, acquisition and interpretation begin. Section 2.1 discussed the three possible types of decisions clinicians may make: diagnostic formulation, diagnostic refinement (test selection) and treatment selection. During diagnostic formulation, the structure of the clinical problem is defined, leading to identification of the information needed to solve it. Following this, the clinician will acquire the information needed to refine the diagnosis. This may involve acquiring specialized information or it may involve the selection and ordering of further diagnostic tests. With the required information identified and acquired, the clinician will integrate and interpret it, leading to the selection of a treatment.

Essential to the identification process is obtaining the right data, at the appropriate level of detail, with the correct semantics, or meaning, for the data. Acquisition of data depends on accessibility, ease-of-use, training (such as effective search techniques or system training) and system reliability. Interpretation of the data requires accuracy, credibility (confidence), effective presentation, and compatibility of data between systems.


3.2 EHR technology characteristics

Nolan made some of the first characterizations of information technology, based on the concept of information systems maturity [44]. Information systems maturity refers to the condition in which information resources are at their fullest potential (fully developed), totally integrated and interoperable [51]. Additional work has been undertaken to identify the criteria of information systems maturity or sophistication [11; 32; 52; 41], with Nolan's work serving as the basis for most of the research that followed.

Unfortunately, there is less guidance in the literature regarding the characterization of technology in the TTF model. In many cases, definition of these characteristics is omitted entirely in favor of reduced models. The difficulty of assigning such characteristics is evident in the literature, most notably in Goodhue's [31] seminal paper, where technology characteristics were represented with the proxy variables "system used" and "department of the respondent". Proxy (dummy) variables were used because Goodhue's study examined TTF for 25 different systems across two companies; capturing and measuring such a vast array of characteristics was not feasible.

Two TTF studies conducted by Dishaw and Strong [21; 22] provide some guidance on technology characterization. The characteristics of the technology are defined according to the system functionality. For example, one study describes the technology according to production and coordination functionality [21]. These definitions are a direct reflection of the task activities.

In the medical informatics literature, there is no broad agreement on how to characterize EHRs. Following the work of Dishaw and Strong [21; 22], organizations such as the International Organization for Standardization suggest that EHRs can be defined according to three basic functions: 1) information functions, 2) knowledge functions, and 3) inferencing functions [35]. Information function in this context is understood as the provision of raw data, such as the recording and presentation of patient vital statistics. Knowledge function means that the system provides formalized knowledge beyond raw data, such as that contained in clinical guidelines or comparative effectiveness information. Finally, inferencing functionality refers to the ability of the system to assist with the clinical reasoning process, best exemplified by the capabilities of clinical decision support systems.
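To illustrate, these three categories could be operationalized as indicator groupings in an evaluation instrument; the category definitions follow [35], while the code structure and example indicator wordings below are hypothetical:

```python
from enum import Enum

class EHRFunction(Enum):
    """The three basic EHR function categories suggested by ISO [35]."""
    INFORMATION = "information"   # provision of raw data
    KNOWLEDGE = "knowledge"       # formalized knowledge beyond raw data
    INFERENCING = "inferencing"   # support for the clinical reasoning process

# Hypothetical survey indicators grouped by function category.
INDICATORS = {
    EHRFunction.INFORMATION: [
        "The EHR records and presents patient vital statistics.",
        "The EHR gives me access to current laboratory results.",
    ],
    EHRFunction.KNOWLEDGE: [
        "The EHR links me to relevant clinical guidelines.",
        "The EHR provides comparative effectiveness information.",
    ],
    EHRFunction.INFERENCING: [
        "The EHR's decision support suggests diagnoses or alerts me to risks.",
    ],
}
```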

3.3 Clinical reasoning task characteristics

Based on the literature on complex systems [50], complex tasks [8], and information processing [19], two major characteristics of the clinical reasoning task are suggested: structural complexity and dynamic uncertainty. Structural complexity captures the configuration of the components and procedures of the task, whereas dynamic uncertainty captures the unpredictable nature of the task.

In the context of patient care, the perceived complexity and uncertainty of the task determine in part the decision strategy used during clinical reasoning [24; 7; 23; 49; 10; 33]. Two reasoning paradigms have approached the task in unique ways. The problem-solving research tradition has been largely focused on describing the complexity of clinical reasoning by expert physicians. The psychological decision research tradition has been guided by statistical models of reasoning under uncertainty [23].

Task complexity refers to the degree of perceived difficulty of making a decision or reasoning through a series of decisions. Task complexity is composed of three dimensions: component complexity, interactive complexity, and procedural rigidity [5]. Component complexity represents the multiplicity of the task components (e.g., the number of people assigned, the variety of organizations represented, the computer systems accessed and used, the machines required, and the variety of resources required to complete the task). Interactive complexity represents the degree of interaction and interdependency among the components of the task (e.g., the inter-connectedness of the people and organizations involved in a given task). Procedural rigidity represents the lack of flexibility in the sequencing and durations of the task components.

Task uncertainty refers to the perceived level of uncertainty or ambiguity in decision-making, and is composed of three dimensions: task novelty, task unanalyzability, and task significance [5]. Task novelty captures the newness (unexpected and novel events that occur in performing the task) and non-routineness (exceptional circumstances requiring flexibility) of the task [18]. Task unanalyzability represents the degree to which the task is unstructured and the information required to perform the task is equivocal, leading to conflicting interpretations [18; 19]. Task significance captures the urgency and impact of the task.


3.4 Utilization

Goodhue [31] notes that the ideal measure of utilization is the proportion of times users choose to use a system. In this field context, however, that proportion is difficult to measure because EHR use is mandatory. Following Goodhue [31], the utilization construct is operationalized by asking users whether they plan to use the EHR in the future and whether they are currently using it.

3.5 Performance

Performance impact is measured by perceived impact, since objective measures of actual decision performance are not available in this field context. Three questions ask respondents to report the perceived impact of electronic health record use on clinical reasoning performance.

4. Research methodology

4.1 Setting, context and subjects

The quantitative portion of the study was conducted at a regional medical center in South Dakota, USA. Subjects included 117 physicians, 20 advanced practice nurses and 24 physician assistants who currently use an EHR system in clinical practice. Forty-nine subjects were aged 55-64, fifty-eight were aged 45-54, thirty-nine were aged 35-44 and fifteen were aged 25-34. One hundred thirty-one subjects worked in a "clinic/physician office" and thirty selected "acute care hospital" as their worksite.

The qualitative portion of this study was conducted on the main campus of the regional medical center in office space provided by the medical center.

4.2 Data collection procedures

Data collection included both qualitative and quantitative methods. In the first phase of this study, semi-structured interviews with clinicians focused on elaboration and refinement of the technology, TTF and task characteristics constructs. Notices were posted in appropriate areas notifying clinicians of the study opportunity a week in advance of the start date. Over a three-day period, nineteen complete interviews were conducted; a total of eight physicians (MD/DO), five certified nurse practitioners (CNP) and six physician assistants (PA) participated. No compensation was offered, and interviews were not video or audio recorded, to preserve confidentiality.

In the second phase of the study, the survey instrument was pretested with a sample group of 11 clinicians to assess the clarity of the questions. Several questions on the test survey were revised as a result of this exercise; due to space limitations, these revisions are not discussed here.

An initial email invitation to participate in the online survey was sent to 258 clinicians. The email originated from the medical center's clinical informatics department and included an attached letter of endorsement from the Director of Clinical Research. Three additional emails were sent approximately two weeks apart, resulting in 42 responses, of which nine were incomplete and could not be used.

Simultaneously, paper surveys were collected from the same pool of initial invitees who had not responded to the online invitation. A total of 119 paper surveys were collected at affiliated clinics, physician offices and the medical center main campus; a total of 137 hours of investigator time was logged to accomplish this response. The combined effort yielded a 62% response rate.

4.3 Data analysis

The initial intent was to analyze the data with a complicated set of individual regressions using proxy variables; however, this method proved unnecessarily cumbersome given the model's overall complexity. Structural equation modeling using the partial least squares (PLS) method was chosen instead, for two reasons. First, PLS is designed to explain the significance of the relationships in the model, as in linear regression; for this reason, PLS is better suited to predictive modeling than covariance-based SEM, which is primarily concerned with model fit. Second, estimation of significance does not require parametric assumptions, thus permitting analysis of smaller data sets.

To evaluate the measurement model, PLS estimates the internal consistency for each block of indicators. PLS then evaluates the degree to which a variable measures what it was intended to measure [14; 54]. This evaluation assesses construct validity, which is composed of convergent and discriminant validity [54]. Convergent validity of the variables is assessed by examining the t-values of the outer model loadings. Discriminant validity is evaluated by comparing item loadings to variable correlations and by comparing the square root of the AVE of each variable to the correlations of that construct with all other variables.
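The quantities involved follow standard formulas [26]; as a brief illustration (with hypothetical loadings and correlations), composite reliability, AVE, and the Fornell-Larcker discriminant check can be computed from the standardized outer loadings:

```python
import numpy as np

def convergent_validity(loadings):
    """Composite reliability and AVE from the standardized outer loadings
    of one construct's indicator block (standard formulas, per [26])."""
    lam = np.asarray(loadings)
    error_var = 1.0 - lam**2                     # indicator error variances
    cr = lam.sum()**2 / (lam.sum()**2 + error_var.sum())
    ave = np.mean(lam**2)
    return cr, ave

# Hypothetical loadings for one construct; thresholds as cited in Section 5.
cr, ave = convergent_validity([0.82, 0.78, 0.86, 0.74])
print(cr > 0.8, ave > 0.5)   # both should hold for an acceptable block

def discriminant_ok(ave_by_construct, corr):
    """Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
    correlations with every other construct."""
    root_ave = np.sqrt(np.asarray(ave_by_construct))
    off_diag = corr - np.diag(np.diag(corr))
    return bool(np.all(root_ave > off_diag.max(axis=1)))

print(discriminant_ok([0.64, 0.55], np.array([[1.0, 0.6], [0.6, 1.0]])))
```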

With respect to the structural model, path coefficients are understood as regression coefficients, with t-statistics calculated using a bootstrapping method with 200 resamples. To determine how well the model fits the hypothesized relationships, PLS calculates an R2 for each dependent construct in the model. As in regression analysis, R2 represents the proportion of variance in the endogenous constructs that can be explained by the antecedent constructs.
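A minimal numpy sketch of this bootstrapping logic (our illustration, using simulated data and a simple regression as a stand-in for the PLS inner-model estimation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 152                         # usable responses implied by Section 4.2
ttf = rng.normal(size=n)
perf = 0.5 * ttf + rng.normal(size=n)   # simulated TTF -> Performance path

def path_coefficient(x, y):
    """Simple-regression (path) coefficient of y on x."""
    return np.corrcoef(x, y)[0, 1] * y.std() / x.std()

beta = path_coefficient(ttf, perf)

# Bootstrap: re-estimate the path on 200 resamples, as in the analysis.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    boot.append(path_coefficient(ttf[idx], perf[idx]))
t_stat = beta / np.std(boot, ddof=1)    # |t| > 1.96 ~ significant at 0.05

r_squared = np.corrcoef(ttf, perf)[0, 1] ** 2   # variance explained
print(t_stat, r_squared)
```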

5. Results and discussion

Due to space limitations, an extensive results summary table is not included here. The results, however, show composite reliability exceeding 0.8, as recommended [47]. AVE, which can also be considered a measure of reliability, exceeds 0.5, as suggested [26]. The t-values of the outer model loadings exceed 1.96, verifying the convergent validity of the instrument [27].

Figure 2 depicts the structural model with path (regression) coefficients and the R2 values for the dependent variables TTF, Use and Performance. As shown, the R2 values for the TTF and Performance constructs are 0.679 and 0.257, respectively; that is, the model explains 67.9% of the variance in TTF and 25.7% of the variance in Performance.

Figure 2. Structural model

With respect to the hypothesized determinants of TTF, Task Characteristics significantly influenced TTF (β = 0.252, p < 0.001), and Technology Characteristics showed a strong influence on TTF (β = 0.726, p < 0.0001). The direct path from TTF to Performance was also significant (p < 0.0001). As expected, TTF did not significantly influence Use; utilization of the EHR by subjects was mandatory in this setting, and as such, improvements to task-technology fit would likely have no impact on a required activity. Interestingly, the path from Use to Performance was significant, perhaps suggesting that use of the EHR is an important precondition to performance gains or losses.

6. Conclusion and future work

In this study we report on user evaluations of electronic health records using task-technology fit as the underlying model. We adapted the original TTF model proposed by Goodhue [31] to the healthcare industry and, in this case, specifically extended it to evaluate the impact of EHR use on clinical reasoning performance.

The primary construct targets for this study were task and technology characteristics. Prior research has not adequately addressed these constructs in the healthcare domain. In the context of clinical reasoning, we correctly postulated that task complexity and uncertainty would significantly influence the fit between technology and task (the TTF construct).

In a similar fashion, we developed a set of indicators which defined the characteristics of EHR technology. We based these on patterns in prior research which suggested that technology can be characterized by its functionality, in this case by the functions of information and knowledge provision, as well as inferencing support.

One contribution of this study is an evaluative framework for understanding the factors that impact clinical reasoning performance. Performance of this task, complex and uncertain as it is, can be enhanced when the technology meets the demand for current, accurate, detailed information, knowledge and decision support. Another key contribution is a validated instrument for use by researchers, health care administrators and executives, as well as clinicians. Such an instrument may be used to predict the impact on clinical reasoning performance, or it may simply be used to understand how an existing system could be improved to support better clinical decisions. Finally, this study extended a cornerstone IS performance theory (TTF) to a new domain, and demonstrated the continued relevance of TTF theory to modern information systems challenges.

Some limitations worth noting include the nature of the technology under investigation. Each health system will have its own brand of EHR, may be implementing it in phases, and has its own unique information culture in which the technology is implemented, adopted and used. These variations may produce results different from those obtained here. Another limitation is that the participating organization had several years of experience with its EHR and was at an advanced stage of implementation and use. Future studies using this model may consider adapting the technology characteristics construct to the specific needs of the organization under investigation, and testing the model during various phases of implementation and use.

7. References

[1] Al-Gahtani, S. S. (2001). “The applicability of TAM outside North America: An empirical test in the United Kingdom.” Information Resources Management Journal 14(3): 37-46.

[2] Ammenwerth, E., et al. (2003). "Medical Informatics and the Quality of Health: New Approaches to Support Patient Care." Methods of Information in Medicine 42: 185-189.

[3] Bagozzi, R.P. (1979). The role of measurement in theory construction and hypothesis testing: Toward a holistic model. In O.C. Ferrell, S.W. Brown, and C.W. Lamb, Jr. (Eds.), Conceptual and theoretical developments in marketing. Chicago, IL: American Marketing Association.

[4] Bagozzi, R.P. (1980). Causal models in marketing. New York: John Wiley.

[5] Becerra-Fernandez, I., Xia, Weidong, Gudi, Arvind, Rocha, Jose. (2008). Emergency management task characteristics, knowledge sharing and integration, and task performance: Research agenda and challenges. Presentation and proceedings of ISCRAM 2008 5th International Conference on Information Systems for Crisis Response and Management, May 4-7 2008 Washington DC, USA.

[6] Bianchi, M. T., B. M. Alexander, et al. (2009). “Incorporating Uncertainty Into Medical Decision Making: An Approach to Unexpected Test Results.” Medical Decision Making 29(1): 116-124.

[7] Bloom, L. A. and B. S. Bloom (1999). “Decision analytic modeling in health care decision making – Oversimplifying a complex world?” International Journal of Technology Assessment in Health Care 15(2): 332-339.

[8] Campbell, D. J. (1988). Task Complexity: A Review and Analysis. Academy of Management. The Academy of Management Review, 13(1), 40.

[9] Cantrill, S.V. (2010) Computers in Patient Care: The Promise and the Challenge. Communications of the ACM, 53(9), pp. 42-47

[10] Charlin, B., R. Gagnon, et al. (2006). “Assessment of clinical reasoning in the context of uncertainty: the effect of variability within the reference panel.” Medical Education 40(9): 848-854.

[11] Cheney, P. H., Dickson, G.W. (1982). “Organizational characteristics and information systems: an exploratory investigation.” Acad. Mgmt 25(1): 170-182.

[12] Chin, W. W., Ed. (1998). The Partial Least Squares Approach for Structural Equation Modelling. Modern Methods for Business Research. Hillsdale, NJ, Lawrence Erlbaum Associates.

[13] Compeau, D. R., Higgins, C. A. (1995). “Computer Self-Efficacy – Development of a Measure and Initial Test.” MIS Quarterly 19(2): 189-211.

[14] Cronbach, L. (1951). “Coefficient alpha and the internal structure of tests.” Psychometrika 16: 297-334.

[15] Croskerry, P. (2002). “Achieving quality in clinical decision making: cognitive strategies and detection of bias.” Acad Emerg Med 9(11): 1184-1204.

[16] Croskerry, P. (2005). “The theory and practice of clinical decision-making.” Can J Anesth 52(6): R1-R8.

[17] Davis, F. D. (1989). “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology.” MIS Quarterly 13(3): 319-340.

[18] Daft, R., MacIntosh, N. (1981). "A tentative exploration into the amount and equivocality of information processing in organizational work units." Administrative Science Quarterly 26(2): 207-224.

[19] Daft, R., Lengel, R. (1986). "Organizational Information Requirements, Media Richness and Structural Design." Management Science 32(5): 554-571.

[20] DesRoches, C. M., Campbell, E. G., Rao, S. R., Donelan, K., Ferris, T. G., Jha, A., Kaushal, R., Levy, D. E., Rosenbaum, S., Shields, A. E., Blumenthal, D. (2008). “Electronic Health Records in Ambulatory Care: A National Survey of Physicians.” The New England Journal of Medicine 359(1): 50-60.

[21] Dishaw, M. T., Strong, D. M. (1998). “Assessing software maintenance tool utilization using task-technology fit and fitness-for-use models.” Journal of Software Maintenance: Research and Practice 10(3): 151-179.

[22] Dishaw, M. T., Strong, D. M. (1999). "Extending the technology acceptance model with task-technology fit constructs." Information & Management 36(1): 9-21.

Edwards, I., M. Jones, et al. (2004). "Clinical reasoning strategies in physical therapy." Physical Therapy 84(4): 312-330.

[23] Elstein, A. S., Schwarz, A. (2002). “Clinical problem solving and diagnostic decision making: selective review of the cognitive literature.” BMJ 324: 729-732.

[24] Elstein, A. S., Shulman, L.S., Sprafka, S.A.. (1978). Medical problem solving: an analysis of clinical reasoning. Cambridge, Mass.:, Harvard University Press.


[25] Ferratt, T. W., Vlahos, G. E. (1998). “An investigation of task-technology fit for managers in Greece and the US.” European Journal of Information Systems 7(2).

[26] Fornell, C., Larcker, D. F. (1981). "Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics." Journal of Marketing Research 18(3): 382-388.

[27] Gefen, D., Straub, D. (2005). "A practical guide to factorial validity using PLS-GRAPH: Tutorial and annotated example." Communications of the AIS 16: 91-109.

[28] Gefen, D., Straub, D., et al. (2000). "Structural Equation Modeling and Regression: Guidelines for Research Practice." Communications of the AIS 4(7): 1-76.

[29] Goodhue, D. L. (1998). “Development and measurement validity of a task-technology fit instrument for user evaluations of information systems.” Decision Sciences 29(1).

[30] Goodhue, D. L., Klein, B. D., March, S. T. (2000). “User evaluations of IS as surrogates for objective performance.” Information & Management 38(2).

[31] Goodhue, D. L., Thompson, R. L. (1995). “Task- technology fit and individual performance.” MIS Quarterly 19(2).

[32] Gremillion, L. (1984). “Organizational size and information systems use: an empirical study.” J. Mgmt. Inform.Sys 1(2): 4-17.

[33] Hozo, I., M. J. Schell, et al. (2008). “Decision-making when data and inferences are not conclusive: Risk-benefit and acceptable regret approach.” Seminars in Hematology 45(3): 150-159.

[34] Hu, P. J. H. (2005). "User acceptance of intelligence and security informatics technology: A study of COPLINK." Journal of the American Society for Information Science and Technology 56(3): 235-244.

[35] ISO (2005). Health Informatics – Electronic Health Record – Definition, scope and context, International Organization for Standardization.

[36] Jorgensen, T. (1995). Measuring Effects. In E.M.S.J. van Gennip, J.L. Talmon (Eds.), Assessment and evaluation of information technologies, Studies in Health Technology and Informatics, vol. 17, IOS Press, Amsterdam, pp. 99-109.

[37] Junglas, I., Abraham, C., Watson, R. T. (2008). “Task- technology fit for mobile locatable information systems.” Decision Support Systems 45(4).

[38] Kilmon, C. (2008). "Using the Task Technology Fit Model as a Diagnostic Tool for Electronic Medical Record Evaluation." Issues in Information Systems 9(2).

[39] Klopping, I., McKinney, E. (2004). “Extending the Technology Acceptance Model and the Task-technology Fit Model To Consumer E-Commerce.” Information Technology, Learning, and Performance Journal 22(1).

[40] Lin, T.-C., Huang, C.C. (2008). “Understanding knowledge management system usage antecedents: An integration of social cognitive theory and task technology fit.” Information & Management 45(6).

[41] Mahmood, M. A., Becker, J.D. (1985). "Impact of organizational maturity on user satisfaction with information systems." Proceedings of the 21st Annual Computer Personnel Research Conference: 134-151.

[42] Marcoulides, G. A. and C. Saunders (2006). “PLS: A silver bullet?” MIS Quarterly 30(2): III-IX.

[43] Mathieson, K., Keil, M. (1998). “Beyond the interface: Ease of use and task/technology fit.” Information & Management 34(4).

[44] Nolan, R. L. (1973). "Managing the computer resource: a stage hypothesis." Comm. ACM 16(7): 399-405.

[45] Nolan, R. L. (1979). “Managing the crisis in data processing.” Harvard Bus. Rev: 115-126.

[46] Norman, G. (2005). “Research in clinical reasoning: past history and current trends.” Medical Education 39(4): 418-427.

[47] Nunnally, J.C. (1978) Psychometric Theory, 2nd ed., McGraw-Hill, New York

[48] Pagani, M. (2006). "Determinants of adoption of High Speed Data Services in the business market: Evidence for a combined technology acceptance model with task technology fit model." Information & Management 43(7).

[49] Parmigiani, G. (2002). “Measuring uncertainty in complex decision analysis models.” Statistical Methods in Medical Research 11(6): 513-537.

[50] Perrow, C. (1984). Normal Accidents: Living with high-risk technologies. Princeton, New Jersey: Princeton University Press.

[51] Raymond, L., Paré, G. (1992). "Measurement of IT sophistication in small manufacturing businesses." Inform. Res. Mgmt 5(2): 4-16.

[52] Saunders, G. L., Keller, R.T. (1984). A study of the maturity of the information systems function, task characteristics and inter-departmental communication: the importance of information systems-organizational fit. Proceedings of the International Conference on Information Systems.


[53] Shirani, A. I., Tafti, M. H. A., Affisco, J. F. (1999). “Task and technology fit: A comparison of two technologies for synchronous and asynchronous group communication.” Information & Management 36(3).

[54] Straub, D. W., Boudreau, M.C., Gefen, D. (2004). “Validation guidelines for IS positivist research.” Communications of the AIS 13(24): 380-427.

[55] Taylor, S., Todd, P. A. (1995). “Understanding information technology usage – a test of competing models.” Information Systems Research 6(2): 144-176.

[56] Teo, T. S. H., Men, B. (2008). “Knowledge portals in Chinese consulting firms: a task-technology fit perspective.” European Journal of Information Systems 17(6).

[57] Venkatesh, V., Bala, H. (2008). “Technology acceptance model 3 and a research agenda on interventions.” Decision Sciences 39(2): 273-315.

[58] Venkatesh, V., Davis, F. D. (2000). “A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies.” Management Science 46(2): 186-204.

[59] Venkatesh, V., Morris, M. G., Davis, G. B., Davis, F. D. (2003). “User acceptance of information technology: Toward a unified view.” MIS Quarterly 27(3): 425-478.

[60] Vlahos, G. E., Ferratt, T. W., Knoepfle, G. (2004). “The use of computer-based information systems by German managers to support decision making.” Information & Management 41(6).

[61] Wills, M., El-Gayar, O., Deokar, A. (2009). Evaluating Task-technology Fit and User Performance for an Electronic Health Record System. 15th Americas Conference on Information Systems.

[62] Wills, Matthew J., Sarnikar, Surendra, El-Gayar, Omar F., and Deokar, Amit V. (2010). "Clinical Knowledge Management Systems: Literature Review and Research Issues for Information Systems." Communications of the Association for Information Systems, Vol. 26, Article 26. Available at: http://aisel.aisnet.org/cais/vol26/iss1/26

[63] Wills, M., El-Gayar, O.F., Sarnikar, S. (2011). Beyond Meaningful Use: A Model for Evaluating Electronic Health Record Success. Proceedings of the 44th Annual Hawaii International Conference on System Sciences, January 2011, Koloa, Kauai, Hawaii.

[64] Zigurs, I., Buckland, B. K. (1998). “A theory of task/technology fit and group support systems effectiveness.” MIS Quarterly 22(3).

[65] Zigurs, I., Khazanchi, D. (2008). “From Profiles to Patterns: A New View of Task-Technology Fit.” Information Systems Management 25(1).


