Measuring performance in the third sector


Carolyn Cordery School of Accounting and Commercial Law, Victoria University,

Wellington, New Zealand, and

Rowena Sinclair School of Business and Law, AUT University, Auckland, New Zealand


Purpose – The purpose of this paper is to set the scene for this special issue by synthesising the vast array of literature on performance measurement to examine what constitutes performance measurement, and why it is important for the third sector. It also analyses key issues of performance measurement and introduces the papers that comprise this special issue of Qualitative Research in Accounting & Management.

Design/methodology/approach – This paper takes the form of a literature review. The authors draw on extensive research on performance measurement from a diverse range of disciplines to identify and explore key definitions, opportunities and challenges with performance measurement in the third sector.

Findings – Economic/financial efficiency approaches, programme theories, strategic and participatory approaches all present opportunities and challenges when measuring performance in the third sector. The papers in this special issue demonstrate the manner in which different organisations have dealt with these.

Research limitations/implications – This special issue of Qualitative Research in Accounting & Management aims to stimulate qualitative research into performance measurement frameworks within the third sector, both inside organisations and in reporting to their external stakeholders (supporters, clients and the general public).

Practical implications – Those charged with governance and management in third sector organisations (TSOs) will seek to use appropriate approaches to measuring and managing performance in order to learn and to discharge accountability. The different aspects of performance measurement will also be of interest to funders, donors, and those who seek accountability from TSOs.

Originality/value – The categorisations of methods and approaches to performance measurement should guide researchers and practitioners alike. A future research programme is also derived.

Keywords Performance measurement, Performance management, Third sector, Charities, Not-for-profit

Paper type Literature review

Qualitative Research in Accounting & Management, Vol. 10 No. 3/4, 2013, pp. 196-212. © Emerald Group Publishing Limited, 1176-6093. DOI 10.1108/QRAM-03-2013-0014

1. Introduction

The third sector is diverse and has a major economic presence in countries throughout the world (Johns Hopkins Institute for Policy Studies, 2003). Third sector organisations (TSOs) are increasingly a focus of policy makers, who seek ways to improve service quality and reduce costs, and thus reduce the size of government (Johns Hopkins Institute for Policy Studies, 2003; Salamon, 2010). TSOs include non-governmental organisations (NGOs), social enterprises, charities, public benefit entities, voluntary organisations, donee organisations, not-for-profit organisations, membership organisations (for example, co-operatives, sports and arts clubs) and professional associations. Salamon (2010) identifies TSOs as non-profit-distributing, independent from government, self-governing, and organisations in which volunteers comprise an important staff resource.

Due to TSOs’ rapid increase in influence and their reliance on third party funding, interest in how TSOs measure and manage performance has intensified. The academic literature is dominated by conceptual papers and quantitative studies of performance measurement and management; hence there is a need for empirical studies of the implementation of TSOs’ performance measurement, management and reporting (Cairns et al., 2005; Ebrahim and Rangan, 2010; Lecy et al., 2012; Wimbush, 2009). This special issue of Qualitative Research in Accounting & Management seeks to stimulate debate based on qualitative research into the use of performance frameworks both within TSOs and between TSOs and their external stakeholders (for example, resource providers including donors and grant-makers, volunteers and supporters, clients/beneficiaries and the general public).

This introduction to the special issue takes the form of a literature review. We draw on research into performance measurement from a diverse range of disciplines to identify and explore key definitions of performance measurement and the opportunities and challenges that performance measurement and management bring in TSOs. Given the range of organisations in the third sector, this article draws on academic literature covering the full range of nomenclature, and also introduces the contributions to this special issue.

The paper is organised as follows: first, we consider the unique features of performance measurement and management in the third sector, including the arguments for and against performance measurement. Terms used in performance measurement and management are defined in Section 3, followed by a categorisation of the approaches used. In the discussion and conclusion, a future research agenda is presented.

2. What are the unique features of performance measurement in the third sector?

2.1 Definitions and opportunities

Payer-Langthaler and Hiebl (2013) note that performance can be defined as "intentional action", and therefore performance measurement is an assessment of the results of intentional action. Performance measurement in the business (first) sector focuses on value creation, that is, intentionally creating money for a firm’s stakeholders, particularly its owners (Munir et al., 2011, 2013; Nicholls, 2009). This relationship does not exist in TSOs, where the resource providers are mainly donors and philanthropic funders who do not typically have an ownership interest. Further, even when an ownership interest exists, the restriction on distributing profit means that the resource providers cannot share in any monetary value created. While members may receive value commensurate with their subscriptions to membership organisations (for example, sports clubs), it is likely that they will also contribute volunteer effort to the public good of the club and therefore create more value for others to enjoy. In other TSOs, the resource providers (for example, donors and philanthropic funders) also do not receive benefits commensurate with the value of their donations. Instead, the TSO’s services are provided to third parties (including, for example, indigent beneficiaries, aged care recipients or the environment).



As monetary value creation for owners is not a relevant measure for TSOs, these organisations are encouraged to measure and manage their performance in pursuit of their non-financial mission. Performance measurement and management serves two main purposes for a TSO: to prove its worth (to resource providers and to service recipients) and, through reporting internally, to improve organizational performance by learning from evaluation of its programmes or services and from comparison to others (Huang and Hooper, 2011; Saj, 2013).

In respect of proving their worth, Connolly and Hyndman (2004) argue that TSOs in the UK must justify their existence. They consider that, unless performance measures are in place, it is difficult for TSOs to counter criticisms of poor management and ineffectiveness. Measuring performance makes visible TSOs’ resources, activities and achievements, which leads to better informed discussions and decisions (Connolly and Hyndman, 2004). In the USA, not-for-profit TSOs also face mounting pressure to demonstrate the effectiveness of their programmes. Bradach et al.’s (2008, p. 90) framework asks: "Which results will we hold ourselves accountable for? How will we achieve them? What will results really cost, and how can we fund them?". Thus, in reporting performance measures to external users, a TSO is most likely to be responding to a demand for accountability, as well as marketing itself as a worthy recipient of future donations and grants (Crofts and Bisman, 2010; Connolly and Hyndman, 2004).

Accounting measures are a common basis for performance reporting. Yet, in Huang and Hooper’s (2011) study of philanthropic funders, funders stated that financial information was of limited use either in choosing which TSOs to fund or in discharging accountability. Funders noted that non-financial information is more important, in particular TSOs’ reports on how they have delivered on their purpose or mission, and the community benefits provided. Huang and Hooper (2011) note that funders were also interested in what a TSO has learned from undertaking a particular project. This shows that learning is important for external resource providers as well as for improving organisational practice.

Nevertheless, Dhanani and Connolly (2012) found that TSOs’ performance reporting is more likely to be donor/funder led. In Kaplan and Grossman’s (2010) study, funders required TSOs to report against specific performance measures and to achieve the results promised. Similarly, TSOs (mainly social enterprises) may respond to the promise of social investing by seeking to be "highly performing" TSOs that meet quantitative and financial measures (Alliance for Effective Social Investing, 2010).

Several authors have determined that, as well as arising from pressure from funders and donors, measurement emerges in moments of uncertainty, such as the current period of economic uncertainty in which gaining funding is difficult (Barman, 2007; Khumawala and Gordon, 1997; Lyon and Arvidson, 2011; Morris and Ogden, 2011; Pollitt, 1986; Szper and Prakash, 2011; Tooley et al., 2010). Yet there are challenges and drawbacks to measuring performance.

2.2 Challenges and drawbacks

In recent years researchers and consultants have derived economic measures such as social return on investment (SROI) and cost-benefit analysis (CBA) frameworks (Luke et al., 2013). Yet these quantitative measures are controversial due to the need to monetise outputs and outcomes that may not be traded in a marketplace; such measures are also often infeasible. One problem cited is the cost of data collection and analysis; many TSOs lack the specific expertise required and must employ consultants to assist (Cnaan and Kang, 2010). A further challenge is the difficulty of attributing performance to a specific TSO. Ebrahim and Rangan (2010) note that it is easy to attribute responsibility for a TSO’s success in a simple intervention such as an immunisation programme. However, when the TSO works with others towards more complex goals, such as economic and community development or advocacy, isolating a specific TSO’s success or failure for attribution is very difficult. In addition, unless experimental methods are used to separate a treatment group from a control group, attributing any outcome in a beneficiary’s life to a specific TSO intervention will also be challenged.

However, the need remains for TSOs to show the difference they make in their communities, to be clear about the outcomes they are working towards, and to use performance frameworks to utilise scarce resources effectively. Hyndman (1991) found that the reason many TSOs did not report performance measurements was the difficulty of measuring them. More recently, this has been supported by Lee and Fisher (2007), who found that outcome measurement remains a challenge, particularly when the expected impact on beneficiaries is influenced by external environmental factors outside the TSO’s control.

Lyon and Arvidson’s (2011) study found several other barriers to measuring performance. First, TSOs can perceive that performance measurement demands are externally imposed (for example, by funders) rather than internally driven. Second, in some organisations senior management might support performance measurement, but "there is internal resistance among staff" (Lyon and Arvidson, 2011, p. 3). Indeed, Cordery et al. (2013) found that a lack of organisational commitment to monetising the value of volunteers’ donations was a major barrier to reporting volunteers’ performance in TSOs. The need for staff and board support was further noted by MacIndoe and Barman (2012), who found that funders generally provide insufficient resourcing for performance measurement, but that committed staff measure performance even without funding to do so.

The annual regularity of financial reporting presents further difficulties, as noted in Aimers and Walker’s (2008) study, in which TSOs found it difficult to demonstrate an immediate impact from their services since the intended effects may not be apparent for several years. Agyemang et al. (2009) also found that donors’ short-term reporting requirements did not take account of the slow local decision-making processes of some beneficiary communities, making it even more difficult for TSOs to meet strict reporting deadlines.

Given these challenges, TSOs tend to use those measures which are easy to compile rather than the most appropriate measures, thus limiting the meaningfulness of performance reporting information (Agyemang et al., 2009; Lee and Fisher, 2007). Further, while the guide from the Alliance for Effective Social Investing (2010) recognises that performance information should be both positive and negative, TSOs are often afraid to report bad news in case it affects their future funding.

3. What performance is measured?

Whether it is for internal learning or external accountability, third sector performance measurement focuses on outputs, outcomes and impact. MacIndoe and Barman (2012, p. 2) note that "the use of outcomes as the optimal sign of organizational performance replaced prior efforts to measure inputs [. . .] and outputs [. . .] as other indicators of organizational success". Several authors agree with the need for TSOs to focus on their outputs and outcomes as a basis for performance measurement (Ashford, 1986; Barman, 2007; Cairns et al., 2005; Connolly and Hyndman, 2004; Grimwood and Tomkins, 1986; Hyndman and McMahon, 2010; Morris and Ogden, 2011; Szper and Prakash, 2011). In this section the terms used in performance measurement are defined, along with a survey of different tools and third party agencies which report TSO performance.

3.1 Outputs, outcomes and impact

Outputs are defined as the goods and services that the organisation produces (Controller and Auditor-General, 2008; New Zealand Institute of Chartered Accountants, 2007). Outputs can be reported in terms of the proportion of total operating expenditure relating to the beneficiaries of the charity, or the total costs of services provided to beneficiaries (Institute of Chartered Accountants in Australia, 2009), where expenditure is used as a proxy for the income received to cover those programmes. Alternatively, a simple quantitative measure for outputs is the number of programmes and/or clients served. Efficiency can be defined as the relationship between an organisation’s inputs and outputs (Pollitt, 1986).
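The output and efficiency measures just described reduce to simple ratios. As a minimal sketch (the figures and function names below are illustrative assumptions, not drawn from the literature cited), they might be computed as:

```python
def beneficiary_expenditure_ratio(beneficiary_spend: float, total_operating_spend: float) -> float:
    """Proportion of total operating expenditure relating to beneficiaries."""
    return beneficiary_spend / total_operating_spend

def cost_per_client(total_programme_cost: float, clients_served: int) -> float:
    """Average cost of serving one client: a simple quantitative output measure."""
    return total_programme_cost / clients_served

# Hypothetical charity: $800,000 of a $1,000,000 operating budget is spent
# on beneficiary programmes, serving 2,000 clients.
print(beneficiary_expenditure_ratio(800_000, 1_000_000))  # 0.8
print(cost_per_client(800_000, 2_000))                    # 400.0
```

Ratios like these are cheap to compile, which is precisely why, as noted above, TSOs may favour them over more appropriate but harder-to-construct outcome measures.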

Outputs are deemed to be important to donors and funders, with research establishing that they are concerned about the extent of expenditure on overheads such as fundraising and administration (Abraham, 2007; Anthony, 1978; Ashford, 1986; Palmer and Randall, 2002; Rees and Dixon, 1983). As shown in Table I, UK studies found that the rate at which large charities separately disclosed publicity expenses rose from 30 per cent of 1989/1990 accounts (Hines and Jones, 1992) to 71 per cent of 1996/1997 accounts (Connolly and Hyndman, 2000), with fund raising and administration expense disclosures also increasing.

Outcomes can be defined as the change in beneficiaries’ circumstances brought about by the outputs (that is, by the immediate products or services generated by the TSO).

Table I. Large charities that reported output information

Separate disclosure               1989/1990 accounts(a)    1996/1997 accounts(b)
                                     n        %                n        %
Fund raising expenses     Yes       23       58               57       71
                          No        17       42               23       29
                          Total     40      100               80      100
Publicity expenses        Yes       12       30               57       71
                          No        28       70               23       29
                          Total     40      100               80      100
Administration expenses   Yes       37       93               78       98
                          No         3        7                2        2
                          Total     40      100               80      100

Sources: (a) Hines and Jones (1992); (b) Connolly and Hyndman (2000)



Outcomes are “the state, condition, impacts on, or consequences for the community, society, economy, or environment resulting from the existence and operations of the reporting entity” (Controller and Auditor-General, 2008, p. 41). As noted, outcomes are also referred to as “impact” and “social value”. Breckell et al. (2011) split outcomes by time with “impact” being the longer-term effects, and outcomes being the current effects. Pollitt (1986) asserted that effectiveness can be measured by the level of outputs utilised in producing outcomes, and the sustained production of benefits.

Nevertheless, as noted above, the need to monetise benefits in order to measure effectiveness creates an issue. For example, the New Zealand Community Law Centres o Aotearoa sought to measure the outputs and outcomes of their legal service delivered by volunteer lawyers. They commissioned the New Zealand Institute of Economic Research (2012), which valued the casework the TSO undertook (outputs) as being worth $3.30 for every $1 of funding. Nevertheless, the outcomes, in terms of information and education provided to those needing legal advice and advocacy for law reform, could not be monetised as there was no market precedent. In this case, the TSO was unable to demonstrate quantitatively the benefits it delivers to its clients. Yet measuring outcome performance qualitatively has also presented real challenges to TSOs and researchers.

Gordon et al. (2009, p. 482) supported the move to include outcome data on the impact or accomplishments of charities, but considered that this was not practical as "no one has found a way to measure and report on effectiveness and quality of services". Independent assurance on TSOs’ performance indicators is likely to be costly, no matter the type of indicator or who undertakes it, and it may be of limited benefit (Pendlebury et al., 1994). Pendlebury et al. hypothesised that emphasis would inevitably be given to measuring what was immediately measurable, rather than what should be measured. This view was supported by Connolly and Hyndman (2004), who noted that if no verification of performance measurements is required, there may be a temptation to present outcomes in a manner perceived as more acceptable to the reader, for example by exaggerating good performance, regardless of its accuracy.

3.2 Third party assessments of TSOs’ performance

As Connolly and Hyndman (2013) note, donors do not necessarily read TSOs’ performance reports. Accordingly, watchdog agencies have established themselves as third party assessors of TSO performance, especially in the charity sector. Third parties include GuideStar, in the USA, the UK, India and now globally, and the Better Business Bureau’s (BBB) Wise Giving Alliance, which reports on national and some regional charities in the USA. A number of these organisations have also developed frameworks or guides to encourage TSOs to report in particular ways, including: Charting Impact; Charity Navigator; New Philanthropy Capital (NPC); and the Inspiring Impact Group.

BBB Wise Giving Alliance and GuideStar USA are rating agencies that have teamed with Independent Sector (a third sector leadership forum) to develop a Charting Impact framework for TSO reporting. This requires TSOs to measure their performance in relation to their mission (objectives) and results. Charting Impact was developed by nearly 200 leaders in the third sector to establish an industry standard for performance reporting, which some see as a useful step forward. Alternatively, NPC utilises "The principles of good impact reporting" to provide a framework for performance measurement (Association of Chief Executives of Voluntary Organisations et al., 2012b). Another rating agency in the USA, Charity Navigator, has taken a different approach by broadening its own evaluation system beyond financial health to include non-financial performance. This saw Charity Navigator launch its new "Results Reporting" dimension in 2013 to rate charities according to a set of indicators.

In the UK, the Inspiring Impact Group is a collaboration of TSOs whose vision is to see more TSOs measuring their performance. They aim to do this by encouraging TSOs to utilise The Code of Good Impact Practice published by the National Council for Voluntary Organisations (NCVO). Currently the draft code is out for consultation (Inspiring Impact, 2013). Charitable TSOs in England and Wales are required by the Charity Commission to report on how they have delivered public benefit in respect of their charitable aims, as highlighted by Morgan and Fletcher (2013). This reporting requirement is in addition to the financial information these charities must file, although compliance with it leaves something to be desired.

Nevertheless, third party rating agencies’ specific focus on, for example, output measures of performance can place TSOs in a poor light. Tinkelman’s (2009) analysis of the Avon Products Foundation’s (Avon) breast cancer walks highlighted the limitations of analysing output performance alone. In this case Avon’s walks did not meet the BBB Wise Giving Alliance (2003) guideline of spending no more than 35 per cent of donations on fund raising. Avon reacted by dismissing the organisation that ran the walks on its behalf in order to reduce its fund raising expenses, resulting in charity donations from the walks falling from US$145 million in 2002 to US$27 million in 2003. While Avon consequently met the BBB Wise Giving Alliance’s guideline, that action resulted in a staggering reduction in the net funds available for breast cancer research.

In other research, TSOs have been shown to have shortcomings in their reporting of outputs and outcomes. Connolly and Dhanani’s (2009) UK study of TSOs found that 51 per cent of their survey participants failed to provide output and outcome information in their annual reports. In the USA, a similar study found that performance measurement was widespread, with 95 per cent of the sample providing output measures and 70 per cent outcomes (Salamon et al., 2010). Notwithstanding these higher levels of reporting in the USA, 80 per cent of the study’s respondents called for better tools to measure qualitative outcomes.

In the UK, NPC published results from a review of the annual reports, annual reviews, impact reports and web sites of 20 of the top 100 UK fundraising charities (Hedley et al., 2010). The study found that 90 per cent of charities reported their outputs, but only 41 per cent communicated their outcomes. In an extensive study by Cass Business School and the Charity Finance Directors’ Group (CFDG), only 8 per cent of charities provided information on the impact these organisations had made (Breckell et al., 2011). (This UK study had three data streams: first, 164 surveys from charity finance directors; second, a review of 300 large fundraising charities; and third, focus groups with CFDG members.) Similarly in this issue, Morgan and Fletcher (2013) also found poor reporting across more than 1,400 charities.

A collection of case studies from organisations that are measuring outcomes was recently published by three TSOs: the Association of Chief Executives of Voluntary Organisations, the Charity Finance Directors’ Group, and New Philanthropy Capital (2012a). Their report synthesised the performance measurement experiences of nine TSOs. The authors considered that the case studies demonstrated how important it is that decision-makers are helped to understand impact, and that donors and funders are reassured that the TSO has a positive impact on its beneficiaries.

4. Approaches to performance measurement

Bulmer (2001, p. 455, emphasis added) states that "measurement is any process by which a value is assigned to the level or state of some quality of an object of study". A plethora of approaches has been developed for TSO performance measurement, such that an exhaustive typology is impossible to provide (Polonsky and Grau, 2011). At its simplest, a listing could distinguish "quantitative" from "qualitative" methods; yet such a listing would ignore the underlying ethos with which these approaches are taken. Indeed, as this special issue demonstrates, while the need to report has often led to a predominance of quantitative approaches (Barman, 2007), quantitative and qualitative methods both have roles in performance measurement and are not mutually exclusive. We therefore group the main performance measurement approaches as being based on: economic/financial efficiency; programme theories; and strategy and participation.

4.1 Economic/financial efficiency approaches

As noted above, the business sector’s focus on financial performance and economic efficiency has driven the push for quantitative performance measures in TSOs, mainly for accountability purposes. Economic efficiency approaches expect TSOs to achieve a specified return and to measure impact in financial terms. Approaches include: CBA; the outcome rating scale (ORS); single outcome agreements (SOAs); social audit; social accounting and audit (SAA); and SROI (Brooks Jr, 1980; Gao and Zhang, 2006; Gibbon and Dey, 2011; Gray et al., 1997; Medawar, 1976; Miller et al., 2003; Natale and Ford, 1994; New Zealand Institute of Economic Research, 2012; Owen et al., 2000; Zadek, 1993). In addition, single measure valuation techniques include replacement cost, opportunity cost and numerous stated preference techniques (for example, contingent valuation, choice experiments, and revealed preference methods) (Cnaan and Kang, 2010). These techniques assume there is a market for a TSO’s activities and that "customers" are present to value those activities.

SROI, developed by the Roberts Enterprise Development Foundation, utilises discounted cash flows to estimate an enterprise value, the programme’s social value (savings to taxpayers less the costs incurred) and a blended value, expressing these calculations as indices of return on the "investment" in the TSO. While SROI is based on an ethos of economic efficiency, it recognises the need for mapping projected outcomes, an approach also required in theory of change (Section 4.2). A number of organisations employing social scientists have formed to advise TSOs on measuring impact and to undertake these measures for TSOs under contract. As there is a range of economic/financial methods, a well-planned assessment should integrate multiple stakeholders and consider longer-term impacts while providing a single measure to communicate impact (Arvidson et al., 2010). Nevertheless, as noted in the New Zealand Institute of Economic Research (2012) report and the cases in Luke et al. (2013), many activities cannot be reduced to a single economic figure and, even when they can, this measure will mask the extent of judgement required in the calculations. Luke et al. (2013) also highlight the high level of resources this approach requires and the need for funder resources in order to undertake appropriate reporting. Arvidson et al. (2010) suggest that, even when resources are available, SROI measures underestimate TSOs’ impacts.
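The SROI arithmetic described above can be illustrated with a short sketch. This is a simplified rendering under assumed figures and a hypothetical 5 per cent discount rate, not the Roberts Enterprise Development Foundation’s full methodology:

```python
def discounted_value(cash_flows, rate):
    """Present value of a stream of annual cash flows received in years 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def sroi_ratio(social_cash_flows, investment, rate=0.05):
    """SROI index: discounted social value (e.g. taxpayer savings less costs
    incurred) per unit of funds invested in the TSO. Figures are illustrative."""
    return discounted_value(social_cash_flows, rate) / investment

# Hypothetical programme: $100,000 invested; estimated savings to taxpayers,
# net of costs, of $40,000 a year for three years.
ratio = sroi_ratio([40_000, 40_000, 40_000], 100_000)
print(f"${ratio:.2f} of social value per $1 invested")
```

The single headline ratio this produces conveniently communicates impact, but it conceals the judgements embedded in the estimated cash flows and discount rate, which is precisely the concern raised by Luke et al. (2013).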

4.2 Programme theory approaches

Programme theories seek to summarise how successful interventions are linked to outputs, outcomes and impacts. These theories of change include approaches described, inter alia, as "intervention logic", "logical frameworks", "programme logic", "results-based accountability" (RBA) and "theory of actions". Logical frameworks (logframes) are the most widely used planning and evaluation tool in international development, although Gasper (2000) argues that their accountability focus means logframes cannot evaluate complex interventions which require TSOs to orientate themselves towards learning. Ideally, stakeholders build a consensus model of programme success and agree on measures of success, which means that baseline data can be collected initially and subsequent performance assessment linked to the programme goals. RBA shares similarities with logframes, and is commonly used as an accountability tool when governments contract domestically. Again, funders impose this strategic approach onto TSOs, with the TSO being expected to report its performance against the imposed plan.
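A logframe’s hierarchy of goal, outcomes, outputs and activities, with agreed indicators and baselines, can be sketched as a simple data structure. The fields and the example programme below are illustrative assumptions, not a prescribed logframe format:

```python
from dataclasses import dataclass, field

@dataclass
class LogFrame:
    """Minimal logical-framework sketch linking an intervention to agreed
    measures of success, so baseline data can be compared with later results."""
    goal: str                                       # long-term impact sought
    outcomes: list = field(default_factory=list)    # changes in beneficiaries' circumstances
    outputs: list = field(default_factory=list)     # goods/services delivered
    activities: list = field(default_factory=list)  # work undertaken
    indicators: dict = field(default_factory=dict)  # measure -> (baseline, target)

# Hypothetical adult literacy programme
frame = LogFrame(
    goal="Improved adult literacy in the region",
    outcomes=["Participants read at a functional level"],
    outputs=["Weekly tutoring sessions delivered"],
    activities=["Recruit volunteer tutors", "Run tutoring sessions"],
    indicators={"literacy test pass rate": (0.40, 0.70)},
)
baseline, target = frame.indicators["literacy test pass rate"]
print(f"Agreed improvement target: {target - baseline:.0%}")
```

Fixing indicators and targets in advance is what makes logframes effective accountability tools, and, as Gasper (2000) argues, what limits their usefulness for learning in complex interventions.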

The rigidities of logframes and RBA have spawned an approach termed "theory of change", which its developers (ActKnowledge) argue is an "enlightened" version of logframes. With a heightened focus on mapping change at each level, theory of change approaches are more likely to require TSOs and funders to state their assumptions and to offer alternatives at decision points (Reisman and Gienapp, 2004). Evaluators may also be interested in methods that enable them to track pathways of change in order to understand how change occurs more generally, rather than specifically in one programme or TSO. As such, experimental and quasi-experimental methods are pushed by some funders as necessary to understand impact (Ebrahim and Rangan, 2010). Yet, as Ebrahim and Rangan (2010) note, such methods are difficult to mobilise in complex situations. The ethical issues raised by providing a "treatment" to one beneficiary group and not another are another reason a TSO is unlikely to use experiments for measuring outcomes.

4.3 Strategic approaches

The underlying ethos of all strategic approaches is that the TSO will measure and manage its performance in terms of its underlying strategy. In this issue, Payer-Langthaler and Hiebl (2013) develop an understanding of performance in Benedictine abbeys which enables these abbeys to identify the activities they should carry out in order to achieve the performance they desire (including balance). Also in this issue, Saj (2013) cites the values of an organisation as the underlying ethos behind performance reporting at board and executive level.

In developing strategic approaches to performance measurement and management, the business sector has relied on Kaplan and Norton’s balanced scorecard, which measures performance in both financial and non-financial terms (Länsiluoto and Järvenpää, 2008). TSOs have been encouraged to re-define which performance objectives to prioritise (Niven, 2008) for internal use. Outcome models (such as DoView) are also designed for organisations to present internally developed strategy diagrammatically and to develop management steps for the TSO to achieve those strategic goals.

In this issue, Tucker and Thorne (2013) analyse the manner in which performance against strategic goals is controlled within TSOs. While none of their interviewees’ organisations appeared to use developed methods such as balanced scorecard or software like DoView, cybernetic logic means that some TSOs in their study closely controlled activity in line with projected outputs and outcomes in order to discharge accountability to funders. Nevertheless, in other TSOs, performance information was more likely to support decision-making, result in informal control, and relate to organizational learning to attain the mission-related goals of the TSO.

4.4 Participatory approaches

Advocacy and network TSOs that work in partnerships towards intangible goals are more likely to use outcome mapping and other participatory approaches to performance measurement and management. Outcome mapping differs from strategic and programme theory approaches in that it is an evaluative tool (rather than an accountability tool) which also requires the "boundary partners" to map how behavioural change will occur and the strategies each will employ to achieve the collaboratively agreed mission. Developed and used by the International Development Research Centre in Canada, outcome mapping has spawned a learning community in which various case studies and developments are shared. These discussions recognise that TSOs have a sphere of control (over their own work) and direct influence over their boundary partners, but only indirect influence on beneficiaries when they rely on partners to deliver programmes. Nevertheless, a useful tool to ameliorate this limitation is to require boundary partners to maintain outcome journals (Earl et al., 2001).

Other participatory approaches include the most significant change (MSC) approach, through which beneficiaries are encouraged to share the MSCs in their lives (Dart and Davies, 2003). Other terms used are: “the evolutionary approach to organisational learning”, “the narrative approach” and also the “story approach”. The MSC approach is qualitative and requires TSOs to establish where change occurs, to collect and review stories of change and then to filter these narratives through the TSO’s various managerial levels. The highest-ranked MSC narratives are chosen and sent to funders, who are invited to select which stories represent the change they seek to fund and why, enabling the TSO to target programmes at specific funding opportunities (Dart and Davies, 2003). A potentially powerful tool, it nevertheless tends to focus on positive (rather than negative) stories, is resource intensive and lacks the comparability that is assumed in more quantitative approaches (Dart and Davies, 2003). Life-story approaches are another way of collecting narratives from beneficiaries, but are most likely to be used for organisational learning, rather than as funding and accountability documents. Each of these narrative approaches is participatory in that it allows the beneficiary’s voice to be heard.

5. Reflections and research agenda

Challenges with quantitative performance measurement provided the impetus for this special issue: a focus on qualitative aspects rather than (or in concert with) quantitative aspects, which alone cannot provide deep understanding of TSOs’ performance. In-depth qualitative studies such as those contained in this special issue aim to bring increased understanding of the nuances of performance measurement and management in the third sector.

This literature review has dichotomised performance measurement and management into methods and actions required to discharge accountability, and those undertaken in order to improve practice within a TSO. While research into accountability in TSOs has expanded in both accounting and management spheres in recent times (Abraham, 2007; Agyemang et al., 2009; Aimers and Walker, 2008; Brown and Caughlin, 2009; Dhanani and Connolly, 2012; Ebrahim, 2003; Gordon et al., 2009; Hyndman, 1991; Morris and Ogden, 2011; Pendlebury et al., 1994; Valentinov, 2011), internal performance measurement and management has been largely left to the evaluation discipline. In this issue, Tucker and Thorne (2013), Payer-Langthaler and Hiebl (2013) and Saj (2013) consider performance and performance reporting within organisations, while Connolly and Hyndman (2013) and Morgan and Fletcher (2013) consider how TSOs report that performance to key stakeholders. Their analyses raise questions about the role of strategy in performance assessments and management. Further research into how TSOs can balance funders’ demands for accountability with the need to evaluate and improve internal performance is one avenue for enquiry. In addition, how do (or can) TSOs strengthen their reporting so that it informs strategy and ensures the organisation stays close to its values, rather than being merely “what the funder/donor wants”?

Further, the prior accountability and performance literature focuses on funders and grant-makers as resource providers, and very seldom considers resources provided by volunteers and supporters, or the needs of beneficiaries (service recipients). O’Brien and Tooley (2013) provide a sobering reminder that performance reporting in TSOs which focuses on funders as resource providers runs the risk of ignoring the “felt” (moral) accountability due to volunteers and supporters. As well as O’Brien and Tooley (2013), prior literature has analysed deficiencies in reporting volunteer contributions measured on a purely quantitative basis (Cordery et al., 2013). The development of mixed methods to reflect more accurately the significant contribution volunteers make to the third sector is long overdue. Research into organisations that are experimenting with narrative and participatory approaches such as MSC is sorely needed. Further, given the subjectivity of narrative approaches, it is important to analyse how these can be compared and “measured” over time and between organisations to ascertain effectiveness. It may be that nuanced quantifications of contributions can be designed to make progress in this area.

Perhaps the largest challenge to performance measurement is attribution; the cost of gathering data was a further challenge highlighted by Luke et al. (2013). In addition, greater collaboration between TSOs is occurring, partly because economic conditions have reduced funding opportunities and partly because the push for smaller government has led governments to write fewer, larger contracts. TSOs are therefore more likely to collaborate in programme activity. Collaboration may increase the ability of a group to claim attribution, as it should bring about a better understanding of working together to achieve outcomes. Nevertheless, future research is urgently needed into how participatory methods such as outcome mapping can be operationalised and subsequently reported at an organisational level.

QRAM 10,3/4


The rise of third-party performance information providers (including government departments involved in shared contracts) inserts distance into the organisation-funder relationship. Research into the effect on trust and accountability demands would enable accounting and management researchers to ascertain the success of initiatives such as high-trust contracting, third-party information gathering, and increased regulation and reporting requirements in the third sector.

Despite the shortcomings of economic efficiency approaches to measurement, TSOs continue to experience pressure to report in quantitative forms. We note that these approaches require substantial judgement about methods and assumptions with respect to market and future values. The qualitative and mixed-method approaches introduced in this special issue also feature ambiguity and bias; nevertheless, as a result of this research, we look forward to TSO performance being measured and managed in a more nuanced and understanding manner.


References

Abraham, A. (2007), “Tsunami swamps aid agency accountability: government waives requirements”, Australian Accounting Review, Vol. 17 No. 1, pp. 4-12.

Agyemang, G., Awumbila, M., Unerman, J. and O’Dwyer, B. (2009), NGO Accountability and Aid Delivery, The Association of Chartered Certified Accountants, London, available at: www. (accessed 18 November 2009).

Aimers, J. and Walker, P. (2008), “Alternative models of accountability for third sector organisations in New Zealand”, paper presented at the International Society for Third Sector Research Eighth International Conference, Barcelona, Spain, July, available at:¼1165 (accessed 23 February 2009).

Alliance for Effective Social Investing (2010), “Background”, available at: Background.html (accessed 16 November 2010).

Anthony, R.N. (1978), Financial Accounting in Nonbusiness Organizations, Financial Accounting Standards Board, Stamford, CT.

Arvidson, M., Lyon, F., McKay, S. and Moro, D. (2010), “The ambitions and challenges of SROI”, Working Paper 49, Third Sector Research Centre, December, available at: Research/EconomicandSocialImpact/TheambitionsandchallengesofSROI/tabid/762/Default. aspx (accessed 23 March 2013).

Ashford, J.K. (1986), Accounting in Charities, Chartered Institute of Management Accountants, London.

Association of Chief Executives of Voluntary Organisations, Charity Finance Directors’ Group, and New Philanthropy Capital (2012a), “Principles into practice: how charities and social enterprises communicate impact”, available at: into-practice/ (accessed 12 April 2012).

Association of Chief Executives of Voluntary Organisations, Charity Finance Directors’ Group, Institute of Fundraising, National Council for Voluntary Organisations, New Philanthropy Capital, Small Charities Coalition, . . . Social Return on Investment Network (2012b), “Principles of good impact reporting”, available at: principles-into-practice/ (accessed 12 April 2012).

Barman, E. (2007), “What is the bottom line for nonprofit organizations? A history of measurement in the British voluntary sector”, VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations, Vol. 18 No. 2, pp. 101-115.



BBB Wise Giving Alliance (2003), “Standards for charity accountability”, available at: www.bbb. org/us/Charity-Standards/ (accessed 2 March 2010).

Bradach, J.L., Tierney, T.J. and Stone, N. (2008), “Delivering on the promise of nonprofits”, Harvard Business Review, Vol. 86 No. 12, pp. 88-97.

Breckell, P., Harrison, K. and Robert, N. (2011), Impact Reporting in the UK Charity Sector, Charity Finance Directors Group & Cass Business School, London, available at: http://cfg., /media/Files/Resources/Impact%20Reporting%20in%20the%20UK% 20Charity%20Sector.ashx (accessed 19 November 2012).

Brooks, L.J. Jr (1980), “An attitude survey approach to the social audit: the Southam Press experience”, Accounting, Organizations and Society, Vol. 5 No. 3, pp. 341-355.

Brown, E. and Caughlin, K. (2009), “Donors, ideologues, and bureaucrats: government objectives and the performance of the nonprofit sector”, Financial Accountability and Management, Vol. 25 No. 1, pp. 99-114.

Bulmer, M. (2001), “Social measurement: what stands in its way?”, Social Research, Vol. 68 No. 2, pp. 455-480.

Cairns, B., Harris, M., Hutchison, R. and Tricker, M. (2005), “Improving performance? The adoption and implementation of quality systems in UK nonprofits”, Nonprofit Management & Leadership, Vol. 16 No. 2, pp. 135-151.

Cnaan, R.A. and Kang, C. (2010), “Toward valuation in social work and social services”, Research on Social Work Practice, Vol. 20 No. 4, pp. 388-396.

Connolly, C. and Dhanani, A. (2009), Narrative Reporting by UK Charities, Research Report 109, Association of Chartered Certified Accountants, available at: general/activities/research/research_archive/rr-109-001.pdf (accessed 18 November 2009).

Connolly, C. and Hyndman, N. (2000), “Charity accounting: an empirical analysis of the impact of recent changes”, British Accounting Review, Vol. 32, pp. 77-100.

Connolly, C. and Hyndman, N. (2004), “Performance reporting: a comparative study of British and Irish charities”, The British Accounting Review, Vol. 36 No. 2, pp. 127-154.

Connolly, C. and Hyndman, N. (2013), “Charity accountability in the UK: through the eyes of the donor”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 259-278.

Controller and Auditor-General (2008), The Auditor-General’s Observations on the Quality of Performance Reporting, Controller and Auditor-General, Wellington, available at: www.oag. (accessed 20 April 2009).

Cordery, C.J., Proctor-Thomson, S.B. and Smith, K.A. (2013), “Towards communicating the value of volunteers: lessons from the field”, Public Money & Management, Vol. 33, January, pp. 47-54.

Crofts, K. and Bisman, J. (2010), “Interrogating accountability: an illustration of the use of Leximancer software for qualitative data analysis”, Qualitative Research in Accounting & Management, Vol. 7 No. 2, pp. 180-207.

Dart, J. and Davies, R. (2003), “A dialogical, story-based evaluation tool: the most significant change technique”, American Journal of Evaluation, Vol. 24 No. 2, pp. 137-155.

Dhanani, A. and Connolly, C. (2012), “Discharging not-for-profit accountability: UK charities and public discourse”, Accounting, Auditing & Accountability Journal, Vol. 25 No. 7, pp. 1140-1169.

Earl, S., Carden, F. and Smutylo, T. (2001), Outcome Mapping: Building Learning and Reflection into Development Programs, International Development Research Centre, Ottawa,



available at: PublicationID=121 (accessed 23 March 2013).

Ebrahim, A. (2003), “Accountability in practice: mechanisms for NGOs”, World Development, Vol. 31 No. 5, pp. 813-829.

Ebrahim, A. and Rangan, V.K. (2010), The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance, Harvard Business School, Boston, MA, available at: (accessed 28 February 2013).

Gao, S. and Zhang, J.J. (2006), “Stakeholder engagement, social auditing and corporate sustainability”, Business Process Management Journal, Vol. 12 No. 6, pp. 722-740.

Gasper, D.E.S. (2000), “Evaluating the ‘logical framework approach’ towards learning-oriented development evaluation”, Public Administration and Development, Vol. 20, pp. 17-28.

Gibbon, J. and Dey, C. (2011), “Developments in social impact measurement in the third sector: scaling up or dumbing down?”, Social and Environmental Accountability Journal, Vol. 31 No. 1, pp. 63-72.

Gordon, T.P., Knock, C.L. and Neely, D.G. (2009), “The role of rating agencies in the market for charitable contributions: an empirical test”, Journal of Accounting & Public Policy, Vol. 28 No. 6, pp. 469-484.

Gray, R., Dey, C., Owen, D.L., Evans, R. and Zadek, S. (1997), “Struggling with the praxis of social accounting: stakeholders, accountability, audits and procedures”, Accounting, Auditing & Accountability Journal, Vol. 10 No. 3, pp. 325-364.

Grimwood, M. and Tomkins, C. (1986), “Value for money auditing – towards incorporating a naturalistic approach”, Financial Accountability & Management, Vol. 2 No. 4, pp. 251-272.

Hedley, S., Keen, S., Lumley, T., Ni Ogain, E., Thomas, J. and Williams, M. (2010), Talking About Results, New Philanthropy Capital, London, available at: download/default.aspx?id=1134 (accessed 12 April 2012).

Hines, A. and Jones, M.J. (1992), “The impact of SORP on the UK charitable sector: an empirical study”, Financial Accountability & Management, Vol. 8 No. 1, pp. 49-67.

Huang, H.J. and Hooper, K. (2011), “New Zealand funding organisations: how do they make decisions on allocating funds to not-for-profit organisations?”, Qualitative Research in Accounting & Management, Vol. 8 No. 4, pp. 425-449.

Hyndman, N. (1991), “Contributors to charities – a comparison of their information needs and the perceptions of such by the providers of information”, Financial Accountability & Management, Vol. 7 No. 2, pp. 69-82.

Hyndman, N. and McMahon, D. (2010), “The evolution of the UK charity statement of recommended practice: the influence of key stakeholders”, European Management Journal, Vol. 28 No. 6.

Inspiring Impact (2013), The Code of Good Impact Practice, Vol. 24, National Council of Voluntary Organisations, London, available at: http://inspiringimpact.files.wordpress. com/2013/02/code-of-good-impact-practice-mar-2013.pdf (accessed 24 February 2013).

Institute of Chartered Accountants in Australia (2009), The Essential for Transparent Reporting: Best Practice Reporting, ICAA, Sydney, available at: leadership/reporting (accessed 11 May 2009).

Johns Hopkins Institute for Policy Studies (2003), Handbook on Non-profit Institutions in the System of National Accounts, Vol. 28, United Nations, New York, NY, available at: www. (accessed 28 October 2006).

Kaplan, R.S. and Grossman, A.S. (2010), “The emerging capital market for nonprofits”, Harvard Business Review, Vol. 88 No. 10, pp. 110-118.



Khumawala, S.B. and Gordon, T.P. (1997), “Bridging the credibility of GAAP: individual donors and the new accounting standards for nonprofit organisations”, Accounting Horizons, Vol. 11 No. 3, pp. 45-68.

Länsiluoto, A. and Järvenpää, M. (2008), “Environmental and performance management forces: integrating ‘greenness’ into balanced scorecard”, Qualitative Research in Accounting & Management, Vol. 5 No. 3, pp. 184-206.

Lecy, J.D., Schmitz, H.P. and Swedlund, H. (2012), “Non-governmental and not-for-profit organizational effectiveness: a modern synthesis”, VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations, Vol. 23 No. 2, pp. 434-457.

Lee, J. and Fisher, G. (2007), “The perceived usefulness and use of performance information in the Australian public sector”, Accounting, Accountability & Performance, Vol. 13 No. 1, pp. 42-73.

Luke, B., Barraket, J. and Eversole, R. (2013), “Measurement as legitimacy versus legitimacy of measures – performance evaluation of social enterprise”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 234-258.

Lyon, F. and Arvidson, M. (2011), “Social impact measurement as an entrepreneurial process”, Third Sector Research Centre, available at: fileticket=Etz5o0ewsw0%3D&tabid=853 (accessed 28 February 2013).

MacIndoe, H. and Barman, E. (2012), “How organizational stakeholders shape performance measurement in nonprofits: exploring a multidimensional measure”, Nonprofit and Voluntary Sector Quarterly, 16 May (available online).

Medawar, C. (1976), “The social audit: a political view”, Accounting, Organizations and Society, Vol. 1 No. 4, pp. 389-394.

Miller, S.D., Duncan, B.L., Brown, J., Sparks, J.A. and Claud, D.A. (2003), “The outcome rating scale: a preliminary study of the reliability, validity, and feasibility of a brief visual analog measure”, Journal of Brief Therapy, Vol. 2 No. 2, pp. 91-100.

Morgan, G.G. (2013), “Purposes, activities and beneficiaries: assessing the use of accounting narratives as indicators of third sector performance”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 295-315.

Morris, T. and Ogden, S.M. (2011), “Funder demands for quality management in the non-profit sector: challenges and responses in a non-profit infrastructure network”, Public Money & Management, Vol. 31 No. 2, pp. 99-106.

Munir, R., Baird, K. and Perera, S. (2013), “Performance measurement system change in an emerging economy bank”, Accounting, Auditing & Accountability Journal, Vol. 26 No. 2, pp. 196-233.

Munir, R., Perera, S. and Baird, K. (2011), “An analytical framework to examine changes in performance measurement systems within the banking sector”, Australasian Accounting Business & Finance Journal, Vol. 5 No. 1, pp. 93-115.

Natale, S.M. and Ford, J.W. (1994), “The social audit and ethics”, Managerial Auditing Journal, Vol. 9 No. 1, pp. 29-33.

New Zealand Institute of Chartered Accountants (2007), TPA-9 Service Performance Reporting, NZICA, Wellington, available at: =NZEIFRS_2009_Volume_files&Template=/CM/ContentDisplay.cfm&ContentID=15702 (accessed 23 May 2009).

New Zealand Institute of Economic Research (2012), Benefits of Community Law: Indicative CBA of CLCs, New Zealand Institute of Economic Research, Wellington, available at: ort-proves-community-law-provides-sound-value-for-money/ (accessed 23 March 2013).

Nicholls, A. (2009), “We do good things, don’t we? Blended value accounting in social entrepreneurship”, Accounting, Organizations and Society, Vol. 34 Nos 6/7, pp. 755-769.

Niven, P.R. (2008), Balanced Scorecard: Step-by-Step for Government and Nonprofit Agencies, 2nd ed., Wiley, New York, NY.

O’Brien, E. and Tooley, S. (2013), “Accounting for volunteer services: a deficiency in accountability”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 279-294.

Owen, D.L., Swift, T.A., Humphrey, C. and Bowerman, M. (2000), “The new social audits: accountability, managerial capture or the agenda of social champions?”, European Accounting Review, Vol. 9 No. 1, pp. 81-98.

Palmer, P. and Randall, A. (2002), Financial Management in the Voluntary Sector: New Challenges, Routledge, London.

Payer-Langthaler, S. and Hiebl, M.R.W. (2013), “Towards a definition of performance for religious organizations and beyond: a case of Benedictine abbeys”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 213-233.

Pendlebury, M., Jones, R. and Karbhari, Y. (1994), “Developments in the accountability and financial reporting practices of executive agencies”, Financial Accountability & Management, Vol. 10 No. 1, pp. 33-46.

Pollitt, C. (1986), “Beyond the managerial model: the case for broadening performance assessment in government and the public services”, Financial Accountability & Management, Vol. 2 No. 3, pp. 155-170.

Polonsky, M. and Grau, S.L. (2011), “Assessing the social impact of charitable organizations – four alternative approaches”, International Journal of Nonprofit and Voluntary Sector Marketing, Vol. 16, May, pp. 195-211.

Rees, J. and Dixon, B.R. (1983), Accounting for Non-profit Organisations, New Zealand Institute of Chartered Accountants, Wellington.

Reisman, J. and Gienapp, A. (2004), Theory of Change: A Practical Tool For Action, Results and Learning, Annie E. Casey Foundation, Organizational Research Services, available at: ={33431955-1255-47F4-A60B-0F5F3AABA907} (accessed 23 March 2013).

Saj, P. (2013), “Charity performance reporting: comparing board and executive roles”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 347-368.

Salamon, L.M. (2010), “Putting the civil society sector on the economic map of the world”, Annals of Public and Cooperative Economics, Vol. 81 No. 2, pp. 167-210.

Salamon, L.M., Geller, S.L. and Mengel, K.L. (2010), “Nonprofits, innovation, and performance measurement: separating fact from fiction”, Communiqué, No. 17.

Szper, R. and Prakash, A. (2011), “Charity watchdogs and the limits of information-based regulation”, VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations, Vol. 22 No. 1, pp. 112-141.

Tinkelman, D. (2009), “Unintended consequences of expense ratio guidelines: the Avon breast cancer walks”, Journal of Accounting & Public Policy, Vol. 28 No. 6, pp. 485-494.

Tooley, S., Hooks, J. and Basnan, N. (2010), “Performance reporting by Malaysian local authorities: identifying stakeholder needs”, Financial Accountability & Management, Vol. 26 No. 2, pp. 103-133.



Tucker, B. and Thorne, H. (2013), “Performance on the right hand side: organizational performance as an antecedent to management control”, Qualitative Research in Accounting & Management, Vol. 10 Nos 3/4, pp. 316-346.

Valentinov, V. (2011), “Accountability and the public interest in the nonprofit sector: a conceptual framework”, Financial Accountability & Management, Vol. 27 No. 1, pp. 32-42.

Wimbush, E. (2009), “Debate: accountability for outcomes – international lessons”, Public Money & Management, Vol. 30 No. 1, pp. 8-10.

Zadek, S. (1993), “The social audit of Traidcraft plc”, Social and Environmental Accountability Journal, Vol. 13 No. 2, pp. 5-6.

Further reading

Wimbush, E. (2011), “Implementing an outcomes approach to public management and accountability in the UK – are we learning the lessons?”, Public Money & Management, Vol. 31 No. 3, pp. 211-218.

Corresponding author

Rowena Sinclair can be contacted at:




