The Incremental Benefits of a Forensic Accounting Course on Skepticism and Fraud-Related Judgments


Tina D. Carpenter, Cindy Durtschi, and Lisa Milici Gaynor

ABSTRACT: This study examines the extent to which providing a course that emphasizes forensic accounting influences students' fraud-related judgments. We follow a cohort of students (trained students) who have enrolled in a forensic accounting course and examine their fraud judgments at various points in time: the first day of instruction, the last day of instruction, and seven months later. We compare these fraud judgments to those of a control group of students who have completed a typical two-course audit sequence (untrained students) and to those of a panel of fraud experts. We find that when confronted with a bad debt expense account that is unusually small, trained students provide significantly higher initial risk assessments post-training (1) than they did pre-training and (2) than did the untrained students. In addition, after being presented with a set of potential fraud-risk factors, trained students provided higher revised risk assessments post-training than they did pre-training, and importantly, these revised assessments were not significantly different from those of a panel of experts. Using risk assessments as an indication of skepticism, we infer that the forensic accounting course raised the students' level of skepticism. We also find, in general, that post-training students assigned higher relevancy ratings to fraud-risk factors than did a panel of experts, while the untrained students ascribed significantly less relevance than the experts did to these same facts. Finally, we find seven months after the course that the trained students' performance is sustained, suggesting that the effects produced by taking a fraud-specific forensic accounting course persist.

Keywords: risk assessment; fraud; training; skepticism; fraud-related judgments; persistence of training effects.

Data Availability: Available upon request.

Tina D. Carpenter is an Assistant Professor at The University of Georgia, Cindy Durtschi is an Associate Professor at DePaul University, and Lisa Milici Gaynor is an Assistant Professor at the University of South Florida.

We express special thanks to Charlie Bame-Aldred, Michael Bamber, Greg Gerard, Jackie Hammersley, Greg Jenkins, Lindsay Jorgensen, Sundaresh Ramnath, Jane Reimers, Srinivasan Sankaraguruswamy, Brad Schafer, Nathan Stuart, Stefanie Tate, and Arnie Wright for their helpful comments. This paper has also benefited from the comments of workshop participants at Florida State University, University of Miami, University of South Florida, and the AAA Auditing Midyear Meeting. We would also like to thank Chris Jones and Norma Montague for their research assistance.

Editor’s note: Accepted by Kent St. Pierre.

Issues in Accounting Education, Vol. 26, No. 1, 2011, pp. 1–21. American Accounting Association. DOI: 10.2308/iace.2011.26.1.1

Published Online: February 2011


INTRODUCTION

Most large universities offer accounting students a two-course audit series. However, the increased emphasis by the accounting profession on fraud training has led some universities to adopt a third course in the audit series that specializes in forensic accounting. The purpose of this paper is to examine the incremental benefit of such a course on students' fraud-risk judgments. Specifically, we examine whether completion of a forensic accounting course affects a student's ability to make risk assessments and judge the relevancy of fraud-risk factors. We study this by comparing the fraud-related judgments of two groups of students to the fraud-related judgments of a panel of experts. One group of students has completed a typical two-course audit sequence; the second group has completed the same two-course sequence as well as a specialized forensic accounting course.

This investigation is important for several reasons. First, detecting fraud has become a high priority in the accounting profession (Elliott 2002; PCAOB 2004, 2007, 2010) and if the typical audit series is not providing future auditors with the skills and characteristics necessary for today's work environment, it is important to know whether adding a forensic accounting course will bring future auditors closer to the level of skill demanded. Second, almost 60 percent of Securities and Exchange Commission (SEC) enforcement actions against auditors between 1987 and 1997 were directly related to the failure of auditors' professional skepticism (Beasley et al. 1999). The Public Company Accounting Oversight Board (PCAOB), in its auditor inspections, has also cited a lack of professional skepticism as a serious problem in auditors' fraud investigations (PCAOB 2007, 2010). In addition, Consideration of Fraud in a Financial Statement Audit, SAS No. 99, has reemphasized the need for auditors to exercise professional skepticism when considering the risk of material misstatement due to fraud, suggesting that increased skepticism should lead to improved risk assessments (AICPA 2002). Professional skepticism has been defined in the literature as "indicated by auditor judgments and decisions that reflect a heightened assessment of the risk that an assertion is incorrect, conditional on the information available to the auditor" (Nelson 2009). Our study directly examines risk assessments conditioned on information available to the individual; thus, this study can also provide needed information to standard setters and regulators.

Third, this study can inform accounting educators. Nieschwietz et al. (2000), in their review of fraud literature, suggest that training may help with fraud detection but point out that there is virtually no research in this area. Further, the PCAOB suggests that training is an important element for improving auditors' risk assessments (PCAOB 2010). Thus, it is important to know whether this additional specialized training (beyond a typical audit sequence) helps to successfully prepare students to a level required by the current audit environment. If specific training in forensic accounting is effective in helping students more accurately assess fraud risk (as compared to experts), and this level persists over time, schools may wish to include such courses in their curriculum, and practitioners and firms might place a premium on students trained with this type of course.

In this paper, we examine four research questions. First, does providing a specialized course in forensic accounting improve a student's ability to make fraud-risk assessments? Second, do students who have completed a forensic accounting course more appropriately incorporate fraud-risk factors into a risk assessment than students who have completed a typical audit sequence? Third, do students who have completed a forensic accounting course more accurately (as benchmarked by experts) assess the relevance of fraud-risk factors than students who have completed a typical audit sequence? Fourth, to what extent do these effects persist over time?

To examine these issues, we run a longitudinal study beginning with a cohort of 37 Master's in Accounting (hereafter, MAcc) students who, during the course of this study, completed an audit course (described in the next section) that emphasizes forensic accounting tools. These students are asked to complete a short case at three points in time: (1) the first day of class before any material is covered (denoted "pre-training"); (2) the last day of class (denoted "post-training"); and (3) seven months after completing the course (denoted "follow-up").1 All students in the cohort have previously completed a typical auditing series, consisting of two audit classes, prior to attending the forensic accounting course.2 The case completed by the students consists of a short description of a company, the company's financial statements, and a statement that highlights bad debt expense as unusually low. We collect three dependent variables. First, after reading the introductory materials (which highlight the unusual bad debt expense), students provide their initial risk assessment. Second, we give them a list of 15 additional facts (some of which could be "red flags" of fraud) that are said to have come to light during their audit and ask them to rate the relevance of each fact. Third, students are asked to provide a final risk assessment. We compare the final risk assessment to their initial risk assessment to determine whether the additional facts influenced them to revise their initial judgments.

To examine the incremental effect of the forensic accounting course on students' fraud-related judgments, we compare (1) the pre-training results (i.e., collected on the first day of class) to (2) the post-training results (i.e., collected on the last day of class). To examine the persistence of the training effects, we compare the post-training results to the follow-up results (i.e., the results collected seven months after completion of the course). In addition, we compare the results from both the trained and untrained students to a benchmark provided by a panel of fraud experts.3 The panel of experts serves as a benchmark for the relevancy of fraud-risk factors and fraud-risk assessments.

We find that, post-training, students provide an initial fraud-risk assessment that is significantly higher than both the untrained students and themselves pre-training, but not significantly different from the panel of experts. After reviewing the additional fraud-risk factors, the untrained students provide a revised fraud-risk assessment that is significantly lower than the experts, while the trained students show no significant differences from the experts. We also find, in general, that when students evaluate a series of fraud-risk factors (i.e., red flags), students post-training often rate these factors as more relevant than did a panel of experts, while untrained students generally rate these same factors as less relevant than did the experts. Finally, we find that these effects persist over time. Specifically, we find that the fraud judgments (initial risk assessments, relevancy assessments of fraud-risk factors, and revised risk assessments) collected in the follow-up questionnaire seven months after completing the course were not significantly different from the post-training fraud judgments collected immediately upon completing the course. These results suggest that specific forensic accounting training helps students assess risk in a manner that is not significantly different from a panel of experts, while students without such training consistently underassess risk in the situation provided. More importantly, students trained specifically in forensic accounting retain the ability to make more accurate risk assessments (benchmarked by the experts) long after the training is completed.

The remainder of this paper is organized as follows. The next section provides background and hypotheses development. The third section outlines the research method. The fourth and fifth sections provide the results and conclusions, respectively.

1 Thirty-seven students started the course and took the pre-test; 36 students completed the post-test; and 17 responded to the follow-up inquiry.

2 The cohort was taught by one professor with ten years' experience in teaching. The students designated "untrained" were taught by two different professors (one for beginning audit, one for advanced audit). The professors who taught the audit courses were not specialists in forensic accounting; rather, they were experienced audit teachers.

3 The panel of experts consisted of seven individuals including a partner at a regional audit firm specializing in fraud; two owners of forensic accounting firms; one forensic specialist at a Big 4 firm; one law enforcement officer specializing in white collar crimes; a chief investigator for the district attorney in a major city, who specialized in white collar crimes; and a state auditor. The panel members had an average of 22.7 years of experience and a combined 159 years of experience.


BACKGROUND AND HYPOTHESES DEVELOPMENT

Background

We posit that a key educational goal of a forensic accounting course may be to immerse students in situations that heighten their awareness of fraud, thereby increasing their sensitivity to any red flags of fraud that may be present in an audit situation. The result should be that they approach any audit situation with a more-questioning mind or skepticism.4

A typical audit course sequence may introduce a student to past frauds as well as known red flags of fraud, but it is possible that simply knowing a list of red flags may have a limited usefulness in detecting future frauds because fraud is both firm- and situation-specific, and red flags have often been determined in retrospect. Section 404 of the Sarbanes-Oxley Act (SOX) underscores this point by requiring that each firm's specific internal control weaknesses be discovered and alleviated because perpetrators are individuals who are under pressure and take advantage of the opportunities available to them in their specific firms. It is unreasonable to expect any audit course to prepare students for every situation they might encounter; thus, a forensic accounting course that enables students to see how perpetrators under pressure take advantage of firm-specific opportunities may be valuable. In addition, research suggests that knowledge of red flags of fraud or the use of typical decision aids that highlight risk factors have little or no effect on risk assessments (e.g., Pincus 1989; Eining et al. 1997; Asare and Wright 2004). Thus, if knowledge of a list of red flags alone does not affect risk assessments, perhaps the combination of knowledge of red flags with additional training where students have had practice detecting fraud in a realistic setting will help them build a cognitive fraud model that will improve their ability to detect fraud (Johnson et al. 1993; Nieschwietz et al. 2000).

The Classroom Experience

Students in this study took a semester-long class in forensic accounting that emphasized fraud detection and investigation techniques from a textbook and hands-on group projects. During the course, they studied in detail several actual frauds gleaned from SEC Accounting and Auditing Enforcement Releases (AAER). For these AAER frauds, student teams performed financial statement analysis of the firms pre- and post-restatements and carefully studied the specific fraud-related details, including characteristics of the fraud perpetrators (i.e., the perpetrators' financial pressures, rationalizations, and opportunities). Students also completed a three-week-long problem-based learning case in which they received the books of a fictitious firm in which several frauds were embedded.5 Student teams had to detect the frauds, investigate them, and tie the crime back to a specific perpetrator. In addition, they had to show that the perpetrator had intent, pressure, opportunity, rationalization, and also had benefited from the crime. This case was an Internet-based simulation that encouraged students to investigate red-flag cues, request additional evidence, conduct analytical procedures, make risk assessments, and interview company personnel as well as confirm details with third parties. This case is rich and complex and simulates fraud investigations in practice.

4 While a more-questioning mind, as shown by higher initial fraud-risk assessments in the presence of fraud-risk factors, may detect more frauds, there is a trade-off to raising a student's level of suspicion, which may be that unnecessary time and money are spent investigating red flags that do not prove to indicate fraud. Therefore, an essential part of a forensic accounting course should be teaching students to distinguish between an error and a fraud and tempering a student's tendency to think every error is a fraud. The goal should be to help students to be alert to anomalies that should trigger a search for additional clues that might indicate fraud rather than error.

5 The auditing course in which the students participated was entitled "Forensic Accounting" and was a semester-long course that included lectures, student presentations, and case studies. One case study was a problem-based learning exercise (Durtschi 2003). Problem-based learning advocates claim knowledge gained during the execution of these cases persists over time (Norman and Schmidt 1992).


Hypotheses Development

Successful fraud detection requires that individuals consider the possibility that fraud exists, conduct procedures to find it, and finally, draw the proper conclusion based on the evidence they acquire. To accomplish these objectives, auditing standards require auditors, during the planning phase of the audit engagement, to assess a sufficient initial likelihood of fraud so that additional tests are conducted when appropriate. Nelson (2009) suggests such risk assessments are associated with auditors' level of skepticism such that skeptical auditors will recognize and weigh the relevance of any additional fraud-risk factors they encounter during the audit and continually revise their risk assessments (AICPA 2002, 2003).

Initial Risk Assessments

In the planning phase of an engagement, one should assess the risk of material misstatement at the assertion level and determine the procedures that are necessary based on that risk assessment (Messier et al. 2007). In this initial assessment, one should consider both the risks that are inherent in the environment (e.g., complexity of transactions) and those that are related to the control environment (e.g., segregation of duties). At this initial phase, early impressions are made about the client and, based on those impressions, auditors assess an initial likelihood that fraud exists in the financial statements. Standard setters suggest that reminding auditors about the possibility of fraud through training is one way that auditors might improve their level of skepticism and thus, risk assessments (AICPA 2003).

While one would expect a course in forensic accounting to affect a student's initial risk assessment, to our knowledge, this has never been formally tested. In the first hypothesis, we test whether students who have completed the forensic accounting course outlined above provide a more accurate (as benchmarked by a panel of experts) initial risk assessment when confronted with a non-conforming account than students who have completed a traditional auditing sequence. Formally, we test the following hypothesis:6

H1: Post-training students will more accurately assess (as benchmarked by experts) initial fraud-risk than students who have not received training.

Revised Risk Assessments

Once the initial assessment is made, an auditor must be open to revising that assessment if additional risk factors come to light. The additional facts must be accurately synthesized and, if appropriate, the initial risk assessment must be revised so that appropriate audit procedures are planned and performed (AICPA 2003). Prior research has shown that auditors have difficulty with this synthesis. For example, Hackenbrack (1992) finds that auditors overly weight non-diagnostic evidence in their risk assessments; Hoffman and Patton (1997) find that accountability (i.e., holding auditors accountable to their superiors) increases auditors' risk assessments, but does not mitigate the fact that they overly weight irrelevant factors. As an extension, Glover (1997) finds that when auditors are faced with time pressure, the negative effect of non-diagnostic evidence on auditors' risk assessments is lower but still persists. Further, both Knapp and Knapp (2001) and Carpenter (2007) find that while managers are effective at assessing the risk of fraud (i.e., as being higher when fraud is present than when it is not), lower-level auditors struggle with making effective risk assessments. To our knowledge, there has been no empirical evidence on how trained students evaluate the relevance of fraud-risk factors or synthesize the factors into a revision of their initial risk assessment.

6 While we cannot test the null (i.e., that there are no differences between the risk assessments of trained students and experts), we can determine whether the risk assessments are significantly different from those of the experts. We state the hypotheses in the null form for clarity.

We would expect that taking a course in forensic accounting that included the detection and investigation of fraud in a realistic setting would lead to differences in how fraud-risk factors are processed and synthesized into revised risk assessments. Specifically, we expect that post-training students will more accurately (as benchmarked by a panel of experts) revise their initial risk assessment in the presence of fraud-risk factors than students who have only completed the typical audit sequence. Thus, we provide the following hypothesis, stated formally:

H2: Post-training students will more accurately revise (as benchmarked by experts) risk assessments than students who have not received training.

Evaluation of Fraud-Risk Factors

The most basic purpose of fraud training is to build a foundation from which participants may consider how and where financial statements might be susceptible to fraud by acquiring knowledge of a set of fraud-risk factors and understanding that the factors must be evaluated in the context of the specific firm. Standard setters recommend that forensic audit procedures specifically designed to identify risk factors be performed during the audit (Hogan et al. 2008). Despite this, the PCAOB observed instances of auditors failing to respond appropriately to identifiable fraud-risk factors. In addition, the PCAOB found transactions that warranted further fraud-risk consideration for which there was no evidence that the auditors had considered any of the associated fraud-risk factors (PCAOB 2007).

Researchers have questioned the value of simply knowing a set of red flags for fraud detection. For example, Wilks and Zimbelman (2004) suggest that training auditors to evaluate fraud-risk cues with typical instructional devices (e.g., checklists, client questionnaires, etc.) may not be effective because these devices fail to engage the auditors in deeper, more-strategic reasoning. Pincus (1989) suggests that red flags are not actually diagnostic and, conversely, are more likely to mislead the auditor. As stated previously, this may be the case because red-flag lists are created retrospectively, and what may have been the precursor to fraud in one case may not apply in all instances. Thus, determining the extent to which a forensic accounting course has taught students how to identify (relevant) red flags for a given situation may require comparing trained students (and those who have not received training) to a panel of experts. If trained students assign a level of relevance similar to that of a panel of experts and untrained students provide a level of relevance that is significantly different from the experts, we can conclude that training provides improved performance in the application of these fraud-risk factors. We expect that a course that focuses on forensic accounting should result in students with better fraud-risk factor evaluation skills than students in a typical audit sequence. Thus, post-training students should be better able to accurately assess the relevance of fraud-risk factors than those who have not received this training. Therefore, we propose the following hypothesis:

H3: Post-training students will more accurately (as benchmarked by experts) assess the relevance of fraud-risk factors than students who have not received training.

Persistence of These Effects

Standard setters suggest that an auditor's professional skepticism can be dulled over time (AICPA 2003). Therefore, if a forensic accounting course improves fraud judgments, it is important that these judgments are sustainable over time. To examine the persistence of the training effects, students must be tested again after time has passed. We posit that a semester-long course that focuses on fraud cases, where students have had the opportunity to detect a series of frauds as well as investigate them in the context of a firm's books, will have sufficient impact that any increase in a student's sensitivity to the possibility of fraud will persist.7 Therefore, we expect that when participants are examined several months after their training, the three fraud judgments tested at the completion of their course (initial risk assessment, revised risk assessment, and relevance ratings of fraud-risk factors) will not be significantly different. Thus, we provide the following hypothesis, stated formally:

H4: Follow-up testing of students who took the forensic accounting course will show initial risk assessments, revised risk assessments, and fraud-risk factor evaluations that are not significantly different from their judgments provided post-training.

RESEARCH METHOD

We examine the effects of a course in forensic accounting on students' fraud judgments. Specifically, we examine the students' pre-training, post-training, and follow-up judgments and compare them to one another, as well as to a control group of students not enrolled in the course and to a panel of experts. Data were collected via a case-based questionnaire.

Participants

Sixty-nine accounting students and seven experts participated in the study. There were 37 students (denoted "trained students") who were enrolled in a university-provided forensic accounting course and a control group of 32 students (denoted "untrained students") who had completed a typical auditing sequence but had not yet enrolled in a forensic accounting course.8 Seventeen of the trained students completed the follow-up instrument seven months after completion of the course. None of the students in either group had any real-world audit experience and all were enrolled in comparable courses (e.g., upper-division undergraduate/graduate accounting courses). The students in the forensic accounting course were all admitted into the MAcc program.

Procedure

The trained students completed the case-based questionnaire on the first day of class prior to any instruction (pre-training), and again on the last day of class (post-training). Seven months after the last day of class, these participants were asked again to complete the instrument via email for follow-up.9 The untrained students completed the instrument on the last day of their second audit course.

Case Materials and Dependent Measures

The case materials had three parts: an initial risk assessment, a fraud-risk factor evaluation, and a revised (i.e., final) risk assessment. Participants took approximately 45 minutes to complete all three parts.

7 The students in this course completed a problem-based learning case created specifically to let them experience the detection and investigation of fraud in a realistic setting. Evidence from the medical field suggests that students who have taken problem-based learning courses have increased knowledge retention (Barrows and Tamblyn 1980; Blumberg and Michael 1991; Norman and Schmidt 1992); thus, we posit that a problem-based learning case should have the same persistent effect in accounting. However, since we do not examine a forensic accounting course that is identical with the exception of that case, we cannot determine the effect of that case specifically on the persistence of skepticism. See Durtschi and Fullerton (2005) for a discussion of the problem-based learning method.

8 One student did not complete the instrument at the end of the course. Thus, we have 36 post-training students.

9 The case was not designed to test specific facts taught in the course, but rather to assess the effect of the course on the way students would assess risk before and after training.


In Part I, participants were asked to assume the role of auditors while reading information about a wholesale office supply company. The case materials included background information about the company (e.g., type of business, relevant employees, etc.) and company financial statements.10 The case also included a statement indicating that their supervisor considered the bad debt expense in the current year unusually low.11 Based only on the background information provided, the financial statements, and this single statement calling attention to that particular non-conforming account, participants were asked to assess the likelihood that there was an intentional misstatement using an 11-point Likert scale with the end points labeled "not at all likely" and "extremely likely."12 Statistical tests of this hypothesis were conducted using t-tests of the difference in means, assuming unequal variances.13
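The comparisons described here and in footnote 13 can be illustrated with a short Python sketch: a Welch (unequal-variance) t-test on two groups' 0 to 10 likelihood assessments, followed by a Wilcoxon-Mann-Whitney test as the non-parametric check. This is a minimal sketch under stated assumptions; the group ratings and the use of scipy are hypothetical placeholders, not the study's data or software.

# A minimal sketch, not the study's data or code: Welch's t-test comparing two
# groups' 0-10 likelihood assessments, plus the non-parametric check noted in
# footnote 13. The ratings below are hypothetical.
import numpy as np
from scipy import stats

untrained = np.array([4, 5, 3, 6, 4, 5, 2, 6, 5, 4])        # hypothetical ratings
post_training = np.array([7, 8, 6, 7, 9, 7, 8, 6, 7, 8])    # hypothetical ratings

t_stat, p_two_tailed = stats.ttest_ind(post_training, untrained, equal_var=False)  # Welch
u_stat, p_mw = stats.mannwhitneyu(post_training, untrained, alternative="two-sided")

print(f"Welch t = {t_stat:.2f}, two-tailed p = {p_two_tailed:.4f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.4f}")

Reporting the parametric result only when both tests agree, as footnote 13 describes, is a common robustness choice when Likert-scale data may not be normally distributed.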

Part II informed the participants that some additional facts had come to their attention during their audit of the firm. These 15 facts, many of which were indicative of fraud-risk factors as outlined in SAS No. 99, varied in their level of relevance to this particular case.14

Experimental factor analysis (Maximum Likelihood, Rotated-Promax) was done on the facts, and 11 of the 15 facts loaded appropriately onto three factors.15 The factors can be described as follows: (1) personal information about employees who were in a position to commit fraud that related to individual-level financial pressures or rationalizations to commit fraud; (2) answers to typical questions an accountant would ask if confronted with an unusual bad debt expense; and (3) facts relating to the firm's economic environment. Table 1 provides a list, by factor, of each item; the factor eigenvalues; item loadings in each factor; Cronbach's alpha scores; and an explanation as to why they might or might not be relevant to the case. Four of the facts cross-loaded and are shown at the bottom of the table. Participants were asked to assess a level of relevance for each fact on an 11-point scale with end points labeled "not at all relevant" and "extremely relevant."
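As a concrete illustration of this step, the Python sketch below runs a maximum-likelihood factor analysis with promax rotation and computes a Cronbach's alpha for one group of items. It is only a minimal sketch: the simulated ratings, column names, item grouping, and the factor_analyzer package are assumptions for illustration, not the authors' SAS analysis.

# A minimal sketch, not the authors' analysis: ML factor analysis with promax
# rotation on simulated relevance ratings, plus a manual Cronbach's alpha.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n_participants, n_items, n_factors = 69, 15, 3
latent = rng.normal(size=(n_participants, n_factors))              # latent constructs
structure = np.zeros((n_items, n_factors))
structure[np.arange(n_items), np.arange(n_items) % n_factors] = 0.8  # simple structure
ratings = pd.DataFrame(
    latent @ structure.T + rng.normal(scale=0.6, size=(n_participants, n_items)),
    columns=[f"fact_{i+1}" for i in range(n_items)])               # hypothetical ratings

fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="ml")
fa.fit(ratings)
pattern = pd.DataFrame(fa.loadings_, index=ratings.columns)        # rotated factor pattern
print(pattern.round(2))

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

factor1_items = ratings[["fact_1", "fact_4", "fact_7", "fact_10", "fact_13"]]  # hypothetical grouping
print(f"Cronbach's alpha for one item group: {cronbach_alpha(factor1_items):.2f}")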

In Part III, participants were asked for a second, revised risk assessment. Each participant provided an assessment of the likelihood that there was an intentional misstatement in the financial statements on an 11-point scale from 0 for "not at all likely" to 10 for "extremely likely." The purpose of the second risk assessment was to evaluate how the participants had processed the additional facts provided in Part II into their revised risk assessment (i.e., whether the red flags encountered during the audit caused them to raise their risk assessment).


10 The financial statements in this case were adapted from Lindberg (1999).

11 A pilot instrument was randomly assigned to several different test groups (including both students and auditors) to control for any effects that might be due to the direction of the potential misstatement (i.e., bad debt expense that was said to be too high or too low). No differences were predicted between these two groups a priori and there were no statistical differences noted during data analysis. Therefore, participants in this study all had instruments where the bad debt expense was low, as fraud would be more likely committed in this scenario.

12 Participants were also asked to assess the likelihood of error (labeled an unintentional misstatement) using a similar scale. Because our focus was on participants' fraud judgments, this question was included so that participants were not sensitized to only the possibility of fraud.

13 Wilcoxon-Mann-Whitney exact tests for the difference in ranks were also conducted. Since we drew the same inferences from the non-parametric tests as from the parametric tests, we report only the parametric results.

14 To control for order effects, the order of these 15 facts was randomly varied such that three versions of the experiment were used. No statistical differences were noted during data analysis between the three versions.

15 With four exceptions, the items combined to create the fraud-risk factors provided factor loadings in excess of 0.50 and Cronbach's alpha levels exceeding the generally accepted threshold of 0.70 (Nunnally 1978). For all combined measures, the percentages of variance explained exceeded 60.0 percent. Cumulative proportions of the variance explained are: Factor 1 (62 percent), Factor 2 (87 percent), and Factor 3 (100 percent). The Tucker and Lewis Reliability quotient was 0.92; and the analysis rejected the hypothesis that there were no common factors (p < 0.001), as well as supported the hypothesis that three factors were sufficient (p = 0.010).


TABLE 1

Facts Participants Read That Came to Their Attention during the Course of Their Audit (Factor Loading for Each Factor) [Cronbach’s Alpha]

Panel A: Factor 1, Personal Information Relating to the Fraud Triangle (Pressure and Rationalization for Persons with Opportunity to Commit Fraud); Eigenvalue 11.01
Item as Written (Factor Loading) [Cronbach's Alpha]: Why Relevant or Not Relevant
"Dan's secretary says Dan has become moody and withdrawn from his associates since his divorce a year ago" (0.95) [0.82]: Could indicate financial pressure on an employee with opportunity to change financial statements.
"Jim feels overworked and underpaid" (0.90) [0.82]: Could indicate a rationalization for fraud.
"Jim's kids are both in college" (0.64) [0.81]: Financial pressure.
"Dan is a devoted worker who has not taken a vacation in the last year" (0.56) [0.82]: Fraud perpetrators often avoid vacations for fear the fraud will be uncovered in their absence.

Panel B: Factor 2, Relevant Accounting Questions; Eigenvalue 4.49
Item as Written (Factor Loading) [Cronbach's Alpha]: Why Relevant or Not Relevant
"Review of the aged accounts receivable trial balance indicates that the amount and percentage of accounts receivable in each aging category were comparable to prior years" (0.76) [0.82]: A change in ability to collect debt could lead to a change in bad debt expense.
"Credit-granting standards haven't changed in the last year" (0.60) [0.82]: A change in credit-granting standards could lead to a change in bad debt expense.
"There was no unexpected increase in sales from year to year" (0.57) [0.82]: A change in sales could lead to a change in bad debt expense.
"The percentages used to estimate the uncollectible accounts were less than half in practically every aging category in the aged accounts receivable trial balance" (0.47) [0.83]: A change in percentages used to estimate uncollectible accounts could change bad debt expense.

Panel C: Factor 3, Firm-Level Economic Facts; Eigenvalue 2.28
Item as Written (Factor Loading) [Cronbach's Alpha]: Why Relevant or Not Relevant
"Office Inc. has steadily paid down their long-term debt at a rate of $100,000 per year" (0.85) [0.82]: Indicates firm is not cash strapped.
"Office Inc. had changes in its accounting for depreciation from the straight-line method to the double-declining balance method" (0.80) [0.84]: An accelerated depreciation method leads to higher depreciation expense, lower net income.
"In 1996, Office Inc. was audited by the Internal Revenue Service" (0.51) [0.83]: Could indicate past financial issues.

Panel D: Facts That Cross-Loaded on More Than One Factor
"Dan plays basketball during lunch"
"When Office Inc. lost some old customers, Dan recruited new customers"
"The market for office equipment has become less competitive in price"
"There are rumors of a possible takeover by another company"

Cumulative proportion of the variance explained is: Factor 1 (62 percent), Factor 2 (87 percent), and Factor 3 (100 percent). The Tucker and Lewis Reliability Quotient was 0.92, and the analysis rejected the hypothesis that there were no common factors (p < 0.001), as well as supported the hypothesis that three factors were sufficient (p = 0.010).


RESULTS

H1–H3 examine whether individuals who have received a course that focuses on forensic accounting have more accurate (as compared to a panel of experts) fraud judgments than students who have completed a traditional two-course audit sequence. To test these hypotheses, we first compare the results from students who have completed a forensic accounting course (post) and students who have completed a traditional audit course (untrained) to a panel of experts. To measure the effect of the course itself, we compare students on the first day of class (pre) to their results on the last day of class (post). We collect and report the assessments of these groups to test the incremental impact of the forensic accounting course, over and above the typical audit sequence, on students' fraud judgments.

H1 examines whether students who have completed a forensic accounting course have more accurate (as compared to experts) initial fraud-risk assessments, when examining financial statements that contain a non-conforming account (i.e., unusual bad debt expense), than do students who have not had such a course. Results of H1 are displayed in Table 2. First, we compare students to the experts and find that students post-training (mean 7.11) are not significantly different from the experts (mean 6.10, p = 0.220). The untrained students report a mean initial fraud-risk assessment of 4.50, which is lower than the experts (p = 0.076). Nelson (2009) posits that a higher risk assessment is an indication of a higher level of skepticism, so we can infer that untrained students are not as skeptical as the panel of experts, while the trained students are not significantly different from the experts in their level of skepticism. To further examine this possibility, we compare the initial risk assessments of students who have only completed a typical audit course series (untrained, mean 4.50) with students who have completed a forensic accounting course (trained-post, mean 7.11). The students who have completed the forensic accounting course assign a significantly higher level of risk than do the students who have completed a typical auditing course sequence (p < 0.001). In addition, students (trained-post, mean 7.11) have higher initial risk assessments after training than on their first day in the forensic accounting course (trained-pre, mean 5.21, p < 0.001). These results provide support for the inference that training raises the student's level of skepticism.

Overall, these results support H1 and suggest that a course in forensic accounting increases a student's initial risk assessment when confronted with a non-conforming account, relative to students who have not had such training, and that the post-training risk assessment is not significantly different from that of the panel of experts. If, as Nelson (2009) suggests, a higher risk assessment is indicative of a higher level of skepticism, these results also support the notion that a forensic accounting course raises a student's level of skepticism.

H2 examines the effect of training on the synthesis of fraud-risk factors into risk assessments. Specifically, we examine whether individuals who have fraud training incorporate the presence of the fraud-risk factors into their risk assessment such that they have more accurate (as benchmarked by the experts) revised (i.e., final) risk assessments than those who have not received fraud training. As reported in Table 3, Panel A, we find that the revised risk assessments of students, post-training, are not significantly different from those of the panel of experts (p = 0.228). However, the untrained students have a mean revised risk assessment that is significantly lower than the experts (p = 0.023). We also find that the post-training students (mean 8.33) reported a significantly higher revised risk assessment than did the untrained students (mean 5.29, p < 0.001). In addition, we find that post-training, the students had a significantly higher revised risk assessment than they had pre-training (mean 6.73, p < 0.001). This provides support for H2 and suggests that when confronted with a non-conforming account and then additional fraud-risk factors, students who have received specific training in fraud better synthesize this information and revise their initial risk assessments upward (as benchmarked by the experts).

Figure 1 shows the mean initial and revised risk assessments for each of the different groups. Shown graphically, the mean expert risk assessments fall between those of the trained and untrained students.


TABLE 2

Tests of H1 and H4
The Effect of Fraud Training on Initial Risk Assessments

Panel A: Mean (Standard Deviation) of Initial Assessments of Risk for Trained and Untrained Students (see notes a and c)
Untrained Students: 4.50 (2.35), n = 32
Trained Students, Pre-Training: 5.21 (2.05), n = 37
Trained Students, Post-Training: 7.11 (2.35), n = 36
Trained Students, Follow-Up: 7.05 (2.01), n = 17
Expert Panel: 6.10 (1.85), n = 7

Panel B: Results of t-tests Comparing Means of Initial Assessments of Risk (comparison between groups; hypothesis tested, if any; t-statistic; p-value, see note b)
Pre versus Post (H1): t = 3.99, p = 0.000
Untrained versus Post* (H1): t = 4.89, p = 0.000
Untrained versus Pre: t = 1.33, p = 0.187
Untrained versus Expert Panel: t = 1.96, p = 0.075
Untrained versus Follow-Up: t = 3.98, p = 0.000
Pre versus Follow-Up: t = 3.10, p = 0.002
Post versus Follow-Up (H4): t = 0.09, p = 0.465
Post versus Expert Panel (additional test, H1): t = 1.30, p = 0.220
Follow-Up versus Expert Panel (additional test, H4): t = 1.12, p = 0.283

* These tests were also run where the post scores were normalized by the difference between untrained students and students taking the pre-test to remove the difference between undergraduate and MAcc students. When the post data was normalized, the t-statistic for untrained versus post was 3.56 (p-value = 0.000) and the t-statistic for untrained versus follow-up was 2.88 (p-value = 0.003).
a Descriptive statistics for participants' initial assessments of the likelihood that the highlighted area of concern (misstatement of bad debt expense) was intentional rather than unintentional on an 11-point Likert scale with endpoints labeled 0, not at all likely, and 10, extremely likely.
b p-values for tests of hypotheses are two-tailed.
c Untrained students have completed a typical auditing series consisting of two audit courses. Pre-training students have completed the two-course auditing series and have now enrolled in a forensic accounting course. Post-training students have just completed the forensic accounting course. Follow-up are these same students, seven months later.


TABLE 3

Tests of H2 and H4
The Effect of Fraud Training on Revised Risk Assessments

Panel A: Mean (Standard Deviation) of Revised Assessments of Risk for Trained and Untrained Students (see note a)
Untrained Students: 5.29 (2.54), n = 32
Trained Students, Pre-Training: 6.73 (1.95), n = 37
Trained Students, Post-Training: 8.33 (1.41), n = 36
Trained Students, Follow-Up: 8.58 (1.62), n = 17
Expert Panel: 7.40 (1.79), n = 7

Panel B: Results of t-tests Comparing Means of Revised Assessments of Risk (comparison between groups; hypothesis tested, if any; t-statistic; p-value, see note b)
Untrained versus Post* (H2): t = 5.97, p = 0.000
Pre versus Post (H2): t = 4.03, p = 0.000
Untrained versus Pre: t = 2.58, p = 0.012
Untrained versus Follow-Up: t = 5.50, p = 0.001
Untrained versus Expert: t = 2.59, p = 0.023
Pre versus Follow-Up: t = 3.66, p = 0.000
Post versus Follow-Up (H4): t = 0.56, p = 0.291
Post versus Expert Panel (additional test, H2): t = 1.30, p = 0.228
Follow-Up versus Expert (additional test, H4): t = 1.52, p = 0.160

* These tests were also run where the post scores were normalized by the difference between untrained students and students taking the pre-test to remove the difference between undergraduate and MAcc students. When the post data was normalized, the t-statistic for untrained versus post was 3.14 (p-value = 0.001) and the t-statistic for untrained versus follow-up was 3.09 (p-value = 0.002).
a Descriptive statistics for participants' revised assessments of the likelihood that the highlighted area of concern (misstatement of bad debt expense) was intentional rather than unintentional on an 11-point Likert scale with endpoints labeled 0, not at all likely, and 10, extremely likely.
b p-values for tests of hypotheses are two-tailed.


If, as argued by the PCAOB, a higher degree of professional skepticism is essential to detecting fraud (PCAOB 2007, 2010), trained students, with their higher fraud-risk assessments, indicative of a higher level of skepticism, are more likely than untrained students to be open to the possibility of fraud when there are indications that fraud might be present.

To control for each participant's initial risk assessments, we report the results of a repeated-measures ANOVA in Table 4. These results show a significant main effect within participants for the revised risk assessments (p < 0.001) in the comparison of pre-trained to post-trained students and post-trained students to untrained students, respectively. In addition, we find a significant main effect for training (p < 0.001). These results provide additional support for H2.
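As an illustration of the design in Table 4, the sketch below fits a mixed repeated-measures ANOVA with training as the between-participants factor and the initial-versus-revised assessment as the within-participants factor. The long-format data are simulated around the group means reported above; the pingouin package and sample sizes are assumptions for illustration, not the authors' software or data.

# A minimal sketch, not the authors' analysis: a mixed repeated-measures ANOVA
# (between factor: training; within factor: revision) on simulated data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for group, (mu_initial, mu_revised) in {"untrained": (4.5, 5.3), "post": (7.1, 8.3)}.items():
    for i in range(30):                              # hypothetical sample size per group
        sid = f"{group}_{i}"
        rows.append({"id": sid, "training": group, "revision": "initial",
                     "risk": float(np.clip(rng.normal(mu_initial, 2.0), 0, 10))})
        rows.append({"id": sid, "training": group, "revision": "revised",
                     "risk": float(np.clip(rng.normal(mu_revised, 1.5), 0, 10))})
long_df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=long_df, dv="risk", within="revision",
                     subject="id", between="training")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])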

Both H1 and H2 reflect the desired result of a course in forensic accounting such that students make more accurate (as compared to experts) fraud-risk assessments than students who have not attended such a class. H3 explores possible reasons for this improvement in performance by the trained students by looking at the individual fraud-risk factors the instrument presented to the students as having "come to their attention during the course of the audit."

H3 examines the effect of fraud training on assigning relevance to fraud-risk factors. We examine whether individuals who have received fraud training assign a level of relevance that more closely reflects the level of relevance assigned by a panel of experts than do individuals who have received no fraud training. Table 5 shows the results of H3.

FIGURE 1
Initial and Revised Fraud-Risk Assessments
[Line chart omitted: mean likelihood assessments (0 to 10 scale) are plotted for the initial and revised risk assessments of the untrained, pre-training, post-training, follow-up, and expert panel groups.]

Figure 1 shows the change from initial to revised risk assessments provided by experts and by students who are untrained, pre-training, post-training, and at follow-up, seven months after completion of the forensic accounting course.


TABLE 4

Tests of H2
The Effect of Fraud Training on Revised Risk Assessments Controlling for Initial Risk Assessments

Panel A: Results of a Repeated-Measures ANOVA of Training Between-Participants on the Revision to the Risk Assessments for Trained Students (Pre- and Post-Training) (see note a)
Source of Variation (df; SS; MS; F-statistic; p-value)
Between-Participants:
Training: df = 1, SS = 111.66, MS = 111.66, F = 18.09, p = 0.000
Error: df = 71, SS = 438.39, MS = 6.18
Within-Participants:
Revision: df = 1, SS = 68.28, MS = 68.28, F = 79.82, p = 0.000
Revision × Training: df = 1, SS = 0.77, MS = 0.77, F = 0.91, p = 0.345
Error: df = 71, SS = 60.73, MS = 0.86

Panel B: Results of a Repeated-Measures ANOVA of Training Between-Participants on the Revision to the Risk Assessments for Trained (Post-Training) and Untrained Students (see note a)
Source of Variation (df; SS; MS; F-statistic; p-value)
Between-Participants:
Training: df = 1, SS = 270.17, MS = 270.17, F = 37.75, p = 0.000
Error: df = 66, SS = 472.41, MS = 7.16
Within-Participants:
Revision: df = 1, SS = 34.53, MS = 34.53, F = 20.52, p = 0.000
Revision × Training: df = 1, SS = 1.53, MS = 1.53, F = 0.91, p = 0.343
Error: df = 66, SS = 111.08, MS = 1.68

a In Panel A, the between-participants variation is measured between the pre-trained participants versus the post-trained participants. In Panel B, the between-participants variation is measured between the trained students, post-training, and the untrained students. In both Panels A and B, the within-participants variation is measured as the initial versus revised risk assessment for each participant.


TABLE 5

The Effect of Fraud Training on the Relevancy Ratings of Fraud-Risk Factors

Panel A: Factor 1, Personal Information Relating to the Fraud Triangle (Financial Pressure and Rationalization in Persons with the Opportunity to Commit Fraud)
Mean (loaded items): Expert Panel 7.32; Untrained 4.88; Pre 6.60; Post 8.47; Follow-Up 7.87
Mean (factor score): Expert Panel 0.1010; Untrained -0.9021; Pre -0.1261; Post 0.7016; Follow-Up 0.4452
Standard Deviation (factor score): Expert Panel 0.1009; Untrained 0.7820; Pre 0.9193; Post 0.5527; Follow-Up 0.6402
Number of Observations: Expert Panel 7; Untrained 32; Pre 37; Post 36; Follow-Up 17
Comparisons between groups (hypothesis tested; t-statistic; p-value):
Expert versus Untrained (H3): t = -3.96, p = 0.002
Expert versus Post (H3): t = 2.60, p = 0.031
Untrained versus Post (H3): t = -9.66, p < 0.001
Pre versus Post (H3): t = -4.68, p < 0.001
Post versus Follow-Up (H4): t = 1.42, p = 0.167
Expert versus Follow-Up (H4): t = 1.31, p = 0.213

Panel B: Factor 2, Relevant Accounting Questions
Mean (loaded items): Expert Panel 7.89; Untrained 6.14; Pre 7.24; Post 8.27; Follow-Up 8.02
Mean (factor score): Expert Panel 0.2345; Untrained -0.7919; Pre -0.0935; Post 0.5455; Follow-Up 0.4423
Standard Deviation (factor score): Expert Panel 0.7820; Untrained 0.4853; Pre 0.9347; Post 0.5808; Follow-Up 0.6423
Number of Observations: Expert Panel 7; Untrained 32; Pre 37; Post 36; Follow-Up 17
Comparisons between groups (hypothesis tested; t-statistic; p-value):
Expert versus Untrained (H3): t = -4.67, p = 0.001
Expert versus Post (H3): t = 1.50, p = 0.166
Untrained versus Post (H3): t = -8.64, p < 0.001
Pre versus Post (H3): t = -3.52, p = 0.001
Post versus Follow-Up (H4): t = 0.56, p = 0.578
Expert versus Follow-Up (H4): t = 0.86, p = 0.402

Panel C: Factor 3, Firm-Level Economic Facts
Mean (loaded items): Expert Panel 2.14; Untrained 3.61; Pre 3.89; Post 5.36; Follow-Up 4.25
Mean (factor score): Expert Panel -0.7389; Untrained -0.4043; Pre -0.0863; Post 0.5183; Follow-Up 0.1555
Standard Deviation (factor score): Expert Panel 0.7158; Untrained 0.7485; Pre 0.9592; Post 0.9149; Follow-Up 0.5276
Number of Observations: Expert Panel 7; Untrained 32; Pre 37; Post 36; Follow-Up 17
Comparisons between groups (hypothesis tested; t-statistic; p-value):
Expert versus Untrained (H3): t = 1.11, p = 0.295
Expert versus Post (H3): t = 4.05, p = 0.002
Untrained versus Post (H3): t = -4.57, p < 0.001
Pre versus Post (H3): t = -2.76, p = 0.007
Post versus Follow-Up (H4): t = 1.82, p = 0.075
Expert versus Follow-Up (H4): t = 2.99, p < 0.001

Results shown in Table 5 are from t-tests of two samples with unequal variances. Probabilities shown are for two-tailed tests. The tests are conducted on the factor scores created by SAS proc score and are calculated by weighting each variable with values from the rotated factor pattern matrix. These scores are used like regression coefficients to weight the actual responses given by participants. We also show the mean for each factor for ease of comparison; these means were created only from the facts that loaded onto a factor; thus, the four items that loaded on multiple constructs are not included in the means, while they are (appropriately weighted) included in the probabilities of differences.


Table 5, Panel A, reports the relevance ratings for Factor 1, "personal information relating to the fraud triangle (financial pressure and rationalization in persons with the opportunity to commit fraud)." We have included in Table 5 the mean scores for each factor as well as the factor score.16

The results in Panel A show that the mean rating for experts was 7.32, while untrained students provided an average relevancy rating for this group of facts that was significantly lower (mean = 4.88, p = 0.002) than the mean rating of the experts. Post-training, students had a mean rating for these red flags of 8.47, which is significantly higher than the mean rating of the experts (p = 0.031). Post-training students also assigned significantly more relevance to these factors than did untrained students (p < 0.001) or themselves, pre-training (mean 6.60, p < 0.001). Thus, the results for this factor are mixed. In summary, the forensic accounting course influences students to assign a higher level of relevance to fraud-risk factors that are indicators of the existence of the fraud triangle, as related to employees with an opportunity to commit fraud, than they would have assigned had they not taken such a course; however, they also ascribe more relevance to these fraud indicators than does the panel of experts.

In Panel B we examine the relevancy ratings for Factor 2, "relevant accounting questions." These facts (as seen in Table 1) are answers to questions a typical auditor should ask when confronted by an unusual bad debt expense. The items in this factor dismiss benign causes for the change in bad debt expense (for example, credit-granting standards have not changed, etc.); as such, we expect them to be relevant to the case. We find that untrained students find these facts significantly less relevant than the experts do (p = 0.001), while post-training students provide a relevancy rating that is not significantly different from the experts (p = 0.166). The effect of the extra training on the relevancy rankings can be seen by comparing both untrained students and pre-training students to post-training students. Post-training students provided significantly higher relevancy rankings than did the untrained students (p < 0.001) and themselves, pre-training (p = 0.001). Thus, a class in forensic accounting appears to influence students to better recognize the relevance of questions an auditor should typically ask when confronted with an unusually small bad debt expense. Further, the trained students' ability to recognize the relevance of these facts is not significantly different from that of a panel of experts.

Panel C contains the results for Factor 3, "firm-level economic facts." Collectively, these facts were constructed to suggest that pressure to commit fraud from within the firm did not exist. For example, the fact that the firm had rapidly paid down debt indicates that the firm did not have a cash shortage. These facts could be interpreted as meaning that if there is fraud within the financial statements, it arose from pressure on an individual with opportunity rather than from financial pressure within the firm. The experts placed a low relevancy ranking on these variables (mean 2.14), and all student groups ranked these facts as more relevant than the experts. The untrained students, while giving a number higher than the experts, were not statistically different (mean 3.61, p = 0.295). Students, post-training, reported a relevancy ranking that was statistically higher than the experts (mean 4.25, p = 0.002). Training appears to have had the effect of raising the relevancy ranking for these items. For example, when we compare students post-training to students pre-training, we find that students post-training provided significantly higher relevancy rankings than they did pre-training (p < 0.001). We find the same result when comparing the students post-training to the untrained students (p < 0.001). Thus, a course in forensic accounting appears to influence students to consider firm-level economic factors to be more relevant than does a panel of experts.

16 The factor scores are used in the statistical analysis and work like regression coefficients, where the relevancy rating given by the participants is multiplied by this score in computing the probability of significant differences between two groups. Means rather than factor scores are often used for these comparisons; however, using factor scores is more appropriate as all responses are accurately weighted within each factor (Suhr 2009). (See Introduction to SAS (UCLA: Academic Technology Services 2007).)
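To make the weighting described in footnote 16 concrete, the sketch below computes a participant-level factor score by standardizing the item ratings and weighting them with factor-score coefficients; groups can then be compared on these scores with the same Welch t-test used elsewhere. The ratings and coefficients are hypothetical, and the standardize-then-weight step is an assumption about how such scoring is commonly done, not a reproduction of the authors' SAS proc score run.

# A minimal sketch, not the authors' SAS proc score step: weighting standardized
# relevance ratings by hypothetical factor-score coefficients to obtain one
# factor score per participant.
import pandas as pd

ratings = pd.DataFrame({                 # hypothetical 0-10 relevance ratings
    "fact_1": [8, 3, 6, 9, 2],
    "fact_2": [7, 4, 5, 8, 3],
    "fact_3": [6, 2, 7, 9, 4],
})
coefficients = pd.Series({"fact_1": 0.41, "fact_2": 0.35, "fact_3": 0.27})  # hypothetical

standardized = (ratings - ratings.mean()) / ratings.std(ddof=1)
factor_scores = standardized.mul(coefficients, axis=1).sum(axis=1)  # one score per participant
print(factor_scores.round(3))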


Overall, the results for H3, that training will cause students to more accurately assess fraud-risk factors, are mixed. We find that students post-training gave the facts a significantly higher relevancy rating than did the panel of experts for two of the three factors examined. However, the untrained students gave relevancy rankings that were lower than the panel of experts for two of three factors. While one might conclude that the trained students are too sensitized to fraud-risk factors, one would also have to conclude that untrained students are not sensitized enough. The downside to high sensitivity to fraud-risk factors is that time and effort may be wasted during an audit looking for fraud where there is none. The downside to not being sensitive enough to fraud-risk factors is that auditors will fail to look for fraud where it might exist. The case can be made that recent audit failures, as well as the current legislative and litigation environment, make it less costly to be more sensitive to the risk of fraud.

H4 investigates the long-term effects of a course in forensic accounting. Specifically, we predict that students who have completed a course in forensic accounting and are examined months after their training will provide fraud judgments that have not degenerated significantly from the judgments they provided at the conclusion of their classroom experience. We therefore examine whether the students' initial risk assessments, relevancy ratings of fraud-risk factors, and ability to incorporate additional evidence that fraud might be present into a revision of their initial risk assessment differ significantly from the judgments they provided on the last day of class.

For this analysis, the trained student participants in this study were contacted seven months after their completion of the course and asked to complete the same case materials. Of the 37 students in the class, 17 trained students responded.17 As reported in Table 2, Panel A, post-training students provided a mean initial risk assessment of 7.11 when confronted with the unusually small bad debt expense. These same students, in response to the follow-up questionnaire, provided a mean initial risk assessment of 7.05. This difference is not significant (p = 0.495). In addition, this number is not statistically different from the assessment provided by the panel of experts (p = 0.283). Also, as reported in Table 3, Panel A, post-training students provided a mean revised risk assessment of 8.33. The same students, in response to the follow-up questionnaire, provided a mean revised risk assessment of 8.58. This difference is not significant (p = 0.291). In addition, this response, like the response to the initial risk assessment, is not significantly different from that of the panel of experts (p = 0.160). Both the initial and revised risk assessments of the follow-up students were significantly higher than those of the untrained students and than their own pre-training assessments. Taken together, these indicators provide evidence that the effect of this training persists.
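The comparisons above are between-group mean comparisons of risk assessments. The sketch below shows one way such a comparison could be computed; it is not the authors' code, the risk-assessment values are made up, and an independent-samples Welch t-test is assumed because the anonymous follow-up responses cannot be matched to individual post-training responses (see footnote 17).

```python
# Sketch: compare post-training initial risk assessments with the seven-month
# follow-up responses. All values are hypothetical.
from scipy import stats

post_training = [7.0, 7.5, 6.8, 7.3, 7.1, 6.9]   # hypothetical 0-10 risk assessments
follow_up = [7.2, 6.9, 7.0, 7.1, 6.8]            # hypothetical follow-up responses

# Treated as independent samples rather than matched pairs, since the original
# instrument was administered anonymously
t_stat, p_value = stats.ttest_ind(post_training, follow_up, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```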

Next, we examine whether the students' relevancy ratings for the fraud-risk factors have been sustained. As reported in Table 5, there is not a significant difference between the post-training students and the students in the follow-up for any of the three factors (Factor 1, p = 0.167; Factor 2, p = 0.578; Factor 3, p = 0.075). Furthermore, the students in the follow-up case have responses that differ significantly from those of the panel of experts on only one of the three factors (firm-level economic facts, p = 0.001), as compared to two of three factors for students post-training; this represents an improvement for the trained students.

17 While the implications of this follow-up investigation are limited because we cannot control for those participants who did not return the questionnaire, the results of this follow-up analysis provide longitudinal data that are informative about the persistence of this training after it has passed. Also, because the instrument was originally administered anonymously, we cannot match the follow-up instruments to the original instrument; rather, we compare the follow-up responses to the entire post-training group. This analysis is especially important because standard setters suggest that "auditors' sensitivity to the existence of fraud possibly could be dulled over time" (AICPA 2003, 23).


Collectively, these results support H4 and suggest that the increased fraud awareness that results from a course in forensic accounting persists. Trained students, even seven months after the course was completed, gave higher initial and revised fraud-risk assessments in the presence of fraud-risk factors than did untrained students, implying a higher level of skepticism that persists over time. Further, the results suggest improvement in the trained students' relevancy rankings for fraud-risk factors. While the effects persist, importantly, they moderate somewhat over time to be more in line with the perceptions of the panel of experts.

CONCLUSION

In this paper, we compare the performance of students who take a course in forensic accounting to a control group of untrained students (i.e., students who have just completed a traditional two-course auditing series, but are not enrolled in a forensic accounting course) and to a panel of fraud experts. We also compare the student responses over time: the first day of the course, the last day of the course, and seven months later. We find that when confronted with an unusual bad debt expense, post-training students reported a significantly higher initial likelihood that the account was intentionally misstated than did their untrained counterparts. Further, their assessment does not differ significantly from that of a panel of experts. This provides some evidence that the training sensitizes them to the presence of fraud and implies that their level of skepticism is higher post-training. We find, in general, that when confronted with a series of fraud-risk factors (i.e., red flags), post-training students tend to rate these facts as more relevant than does a panel of experts, while untrained students see them as less relevant. Our results also suggest that post-training, when confronted by various fraud-risk factors, trained students' revised risk assessments are significantly higher than their initial risk assessments. In addition, their revised risk assessments are significantly higher than those of their untrained counterparts, even after controlling for all participants' initial risk assessments. We also find that after incorporating the risk factors into their risk assessments, post-training students' revised assessments were not significantly different from those of a panel of experts. This provides some evidence that although they found the red flags more relevant than did the experts, this did not translate into revised risk assessments that are too high (as benchmarked by the experts). We believe this provides some comfort that while training clearly increases students' sensitivity to fraud-risk factors, it may not adversely affect their estimate of the amount of risk on an audit. Finally, we provide evidence that the trained students' results are sustained seven months after their training and improve with respect to fraud-risk factors, suggesting that a course that emphasizes forensic accounting results in fraud-related judgments that are sustainable and tend to move closer to those of experts over time.

The implications of these results are important to practitioners, standard setters, and accounting researchers and educators because SAS No. 99 requires auditors to document the fraud risks identified during an audit and to perform audit procedures in response to their fraud-risk assessments (AICPA 2002). Additionally, accounting firms and universities are investing considerable resources in related fraud training. In this study, we find that a course that focuses on forensic accounting raises students' sensitivity to fraud-risk factors; however, when synthesizing the presence of these fraud-risk factors, trained students do not overcompensate in their revised risk assessments. In light of the heavy costs of fraud to the profession (Bonner et al. 1998), these results could be of particular interest to firms for their own fraud training programs, as well as to universities that are teaching, or considering offering, forensic accounting and/or fraud examination courses as part of their accounting curriculum. Further, this study answers the call for research aimed at providing insights on the effects of training and experience on fraud detection (Nieschwietz et al. 2000; Wilks and Zimbelman 2004; Hogan et al. 2008).

We acknowledge the limitations of experimental work in general, and those particular to this study. Any study that takes place in a classroom and over the course of ten months has many uncontrolled variables and influences. However, when trying to learn what works and what does not in the classroom, we hope that a study conducted in an actual classroom has as much to contribute to others in the classroom as a more tightly controlled experiment. Additionally, our results are limited by the effectiveness of the questionnaire. Prior research suggests that the reason more research has not been done on improving auditors' fraud detection skills through training is the practical difficulty of using practicing auditors (Wilks and Zimbelman 2004). While our study uses accounting graduate students, as others have done before (Bloomfield 1997; Zimbelman and Waller 1999), we believe our study is the first to provide evidence that giving students a specialized course in forensic accounting positively impacts their ability to make informed fraud-risk assessments beyond a typical audit series, and that this ability persists over time.

REFERENCES

American Institute of Certified Public Accountants (AICPA). 2002. Consideration of Fraud in a Financial Statement Audit. Statement on Auditing Standards No. 99. New York, NY: AICPA.
——–. 2003. AICPA Practice Aid Series, Fraud Detection in a GAAS Audit: SAS No. 99 Implementation Guide. New York, NY: AICPA.
Asare, S., and A. Wright. 2004. The effectiveness of alternative risk assessment and program planning tools in a fraud setting. Contemporary Accounting Research 21 (2): 325–352.
Barrows, H. S., and R. M. Tamblyn. 1980. Problem-Based Learning. New York, NY: Springer.
Beasley, M., J. Carcello, and D. Hermanson. 1999. Fraudulent Financial Reporting: 1987–1997: An Analysis of U.S. Public Companies. New York, NY: COSO.
Bloomfield, R. J. 1997. Strategic dependence and the assessment of fraud-risk: A laboratory study. The Accounting Review 72 (4): 517–538.
Blumberg, P., and J. A. Michael. 1992. The development of self-directed learning behaviors in a partially teacher-centered problem-based learning curriculum. Journal of Teaching and Learning in Medicine 4 (1): 3–8.
Bonner, S., Z. V. Palmrose, and S. Young. 1998. Fraud type and auditor litigation: An analysis of SEC accounting and auditing enforcement releases. The Accounting Review 73 (4): 503–532.
Carpenter, T. D. 2007. Audit team brainstorming, fraud-risk identification, and fraud-risk assessment: Implications of SAS No. 99. The Accounting Review 82 (5): 1119–1140.
Durtschi, C. 2003. The Tallahassee BeanCounters: A problem-based learning case in forensic auditing. Issues in Accounting Education 18 (2): 137–173.
——–, and R. R. Fullerton. 2005. Teaching fraud detection skills: A problem-based learning approach. Journal of Forensic Accounting VI (1): 187–212.
Eining, M. E., D. R. Jones, and J. K. Loebbecke. 1997. Reliance on decision aids: An examination of auditors' assessment of management fraud. Auditing: A Journal of Practice & Theory 16 (2): 1–19.
Elliott, R. 2002. Twenty-first century assurance. Auditing: A Journal of Practice & Theory 21 (1): 139–146.
Glover, S. M. 1997. The influence of time pressure and accountability on auditors' processing of nondiagnostic information. Journal of Accounting Research 35 (2): 213–226.
Hackenbrack, K. 1992. Implications of seemingly irrelevant evidence in audit judgment. Journal of Accounting Research 30 (1): 126–136.
Hoffman, V. B., and J. M. Patton. 1997. Accountability, the dilution effect, and conservatism in auditors' fraud judgments. Journal of Accounting Research 35 (2): 227–237.
Hogan, C., Z. Rezaee, R. Riley, and U. Velury. 2008. Financial statement fraud: Insights from the academic literature. Auditing: A Journal of Practice & Theory 27 (2): 231–252.
Johnson, P. E., S. Grazioli, and K. Jamal. 1993. Fraud detection: Intentionality and deception in cognition. Accounting, Organizations and Society 18 (5): 467–488.
Knapp, C., and M. Knapp. 2001. The effects of experience and explicit fraud-risk assessment in detecting fraud with analytical procedures. Accounting, Organizations and Society 26 (1): 25–37.
Lindberg, D. L. 1999. Instructional case: Lakeview Lumber, Inc.: A study of auditing issues related to fraud, materiality and professional judgment. Issues in Accounting Education 14 (3): 497–515.


Messier, W. F., S. M. Glover, and D. F. Prawitt. 2007. Auditing & Assurance Services: A Systematic Approach. New York, NY: McGraw-Hill.
Nelson, M. 2009. A model and literature review of professional skepticism in auditing. Auditing: A Journal of Practice & Theory 28 (2): 1–34.
Nieschwietz, R., J. Schultz, and M. Zimbelman. 2000. Empirical research on external auditors' detection of financial statement fraud. Journal of Accounting Literature 19: 190–246.
Norman, G. R., and H. G. Schmidt. 1992. The psychological basis of problem-based learning: A review of evidence. Academic Medicine 67 (9): 557–565.
Nunnally, J. D. 1978. Psychometric Theory. New York, NY: McGraw-Hill.
Pincus, K. 1989. The efficacy of a red flags questionnaire for assessing the possibility of fraud. Accounting, Organizations and Society 14 (1–2): 153–163.
Public Company Accounting Oversight Board (PCAOB). 2004. Financial Fraud. Proceedings of the Standing Advisory Group Meeting, Washington, D.C., September 8–9. Available at: http://pcaobus.org/News/Events/Documents/09082004_SAGMeeting/Fraud.pdf.
——–. 2007. Observations of Auditors' Implementation of PCAOB Standards Relating to Auditors' Responsibilities with Respect to Fraud. Release No. 2007-01. Washington, D.C.: PCAOB.
——–. 2010. Identifying and Assessing Risks of Material Misstatement. Auditing Standard No. 12. Washington, D.C.: PCAOB.
Suhr, D. D. 2009. Exploratory or confirmatory factor analysis? Working paper, University of Northern Colorado. Available at: http://www2.sas.com/proceedings/sugi31/200-31.pdf.
UCLA: Academic Technology Services. 2007. Introduction to SAS. Los Angeles, CA: UCLA Academic Technology Services, Statistical Consulting Group. Available at: http://www.ats.ucla.edu/stat/sas/library/factor_ut.htm.
Wilks, T., and M. Zimbelman. 2004. Using game theory and strategic reasoning concepts to prevent and detect fraud. Accounting Horizons 18 (3): 173–184.
Zimbelman, M., and W. Waller. 1999. An experimental investigation of auditor-auditee interaction under ambiguity. Journal of Accounting Research 37 (Supplement): 135–155.
