
Funding Australian economics research: Local benefits?

Published online by Cambridge University Press:  01 January 2023

Anita Doraisami*
Affiliation:
Federation University Australia, Australia
Alex Millmow
Affiliation:
Federation University Australia, Australia
*
Anita Doraisami, Federation Business School, Federation University Australia, University Drive, Mt. Helen, Ballarat, VIC 3350, Australia. Email: a.doraisami@federation.edu.au

Abstract

In Australia there is a systematic ranking of academic research performance, with a major impact metric based on publications in prestigious journals. Other countries, like Britain with its Research Excellence Framework, have similar metrics. While much analysis and publicity are devoted to rankings of the quality of research, there has been very little focus on whether this highly ranked research has gone on to make a public policy impact. In the case of the economics discipline, there has been little exploration of the relationship between publication in a high-ranked journal and contribution to the analysis of Australia’s most pressing economic issues. This article investigates the extent to which articles in the Diamond list of journals from 2001 to 2010 addressed Australian economic issues. Our results indicate that articles on current policy issues accounted for a very modest fraction of total Diamond list journal articles. One possible explanation for this finding, which is investigated further, is the correlation between an economics department’s Excellence in Research Australia ranking and the number of staff who obtained their doctorates from an overseas university. Such a correlation has implications for the status afforded to economics research with a specific national focus.

Type
Articles
Copyright
© The Author(s) 2016

Introduction

The Australian Excellence in Research Australia (ERA) evaluation process is designed as a comprehensive quality evaluation of research produced in Australian universities against national and international benchmarks. Ratings for each field of research are undertaken for each university and moderated by committees of distinguished Australian and international researchers. Quality is defined largely in terms of impact, and impact in turn is defined using indicators such as citation metrics and, in some social sciences, peer review (Australian Research Council (ARC), 2016). The ERA process has now been through three rounds: in 2010, 2012 and 2015. This article, based on research undertaken before the completion of the most recent round, identifies effects of this particular definition of quality that are still highly significant. The impact in question is that on the academic publishing community, not social impact or benefit. We argue that in the field of economics, a perverse and possibly unintended consequence has been a devaluation of research with a specifically national or local focus or policy orientation, and hence of researchers working in such fields. A flow-on impact has been a tendency towards the recruitment of staff with international profiles based on overseas training. As a result, the nation is losing the benefit that could derive from applying local research training and knowledge to pressing issues of national and international importance.

While the ERA process no longer ranks individual journals as part of its assessment process, journal ranking still matters to departmental heads, particularly in business schools, and hence to researchers when they are selecting a publication outlet. For instance, the Australian Business Deans Council still issues its own comprehensive rating of the ‘quality’ of business and economics journals. The top Australian university economics departments have adopted the policy of instructing staff to aim for the most prestigious journals, usually published overseas, and to eschew publishing in low-ranked journals. This article tests two apparently widespread beliefs. First, it is thought that in Australian business schools, there are only rather limited publication opportunities for Australian-specific content in international journals. Second, and consequentially, leading economics departments in Australia are believed to have been seeking to internationalise their research profiles by internationalising their faculties, recruiting overseas-trained economists on both short-term and long-term contracts, particularly researchers with strong publication records in elite economic journals. This behaviour is seen as being designed to ensure that a university’s economics department achieves a top research ranking in the audit performed by the ARC.

The article seeks to examine whether, how and to what extent the stated objectives of the ERA – optimising national economic, environmental, cultural and social benefits – might apply to the economics discipline. In particular, it examines whether the specific definitions of ‘quality’ and ‘impact’ adopted have included any focus on whether highly ranked research has gone on to make a social impact by contributing to the analysis of Australia’s most pressing economic issues and the most appropriate policy responses to them. The article assesses the extent to which the contribution of Australian authors to what is deemed the best economic research in the world provides insights into Australian economic issues. Importantly, it identifies whether such publications have added to the plurality of research and opinion which shapes economic policy responses to public issues in Australia. The study begins with a literature review analysing the rise of research and journal ranking systems and their intended and unintended outcomes. The next section outlines the methodology and approach to data collection adopted in the study. The section ‘Discussion of findings’ presents the main findings, discusses their implications and examines some possible factors which may account for them. The concluding section examines some policy implications of the findings.

Context and literature review

The ARC website states that the five objectives of the ERA are to establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australian higher education institutions; provide a national stocktake of discipline-level areas of research strength and areas of development opportunity; identify excellence ‘across the full spectrum of research performance’; identify emerging research areas and opportunities for further development; and facilitate intranational and international comparisons of research for all discipline areas (ERA FAQs, ARC, 2016).

In 2013, the Australian Government engaged the consultancy firm ACIL Allen Consultancy to undertake a ‘benefits review’ of the ERA process. The Final Report from this process stated that although the ERA had not been going for long, it was already beginning to offer benefits that included an increase in the social rate of return of research, cost savings, increased university revenue, enhanced economic activity and improvements in ‘accountability, transparency and policy-making’ (ACIL Allen Consulting, 2013: viii). The ‘social rate of return’ was defined as ‘the permanent increase in GDP [Gross Domestic Product]’ as a percentage of the dollar cost of investment leading to this increase. This estimated rate of return was used as an indicator of ‘the economic, environmental, cultural and social impacts of research’ and as ‘a broad measure of return on investment or value creation resulting from research’ (ACIL Allen Consulting, 2013: 15). Impacts were defined in terms of research quality, focus, collaboration and resource allocation (including human resources and planning). Quality was assessed by national and international benchmarking using metrics such as citation profiles and (in some disciplines) peer review of research outputs (ACIL Allen Consulting, 2013). So not only was there no advance on the metrics proposed for identifying impacts, but the social rate of return against which they were to be assessed was the narrow, immediate and hard-to-measure one-off effect on GDP.

In the context of calls by business interests for the greater industry relevance of universities, the Australian Government (2015) released a National Innovation and Science Agenda in December of that year. This Agenda responded to calls for clearer identification of the ‘benefits’ of research by seeking views on a new ‘national impact and engagement assessment’. In 2016, input was sought from universities, industry and other ‘end-users of research’ to develop quantitative and qualitative measures of ‘impact and engagement’, with a pilot assessment to be run in 2017. The claim that Australia has not performed well compared with other Organisation for Economic Cooperation and Development (OECD) countries in industry–research collaboration is not contested here. It is of course easier for governments to exert pressure on universities than on industry to bridge this gap. It remains to be seen how widely or narrowly, beyond such ‘end-users’, the social net of ‘engagement’ and ‘beneficiaries’ will be drawn. Certainly, the ARC itself defines ‘impact’ more broadly than the definition implied in the ERA’s use of metrics. The ARC’s (2015) glossary of terms, linked from its website ‘Research impact principles and framework’, defines ‘impact’ as

the demonstrable contribution that research makes to the economy, society, culture, national security, public policy or services, health, the environment, or quality of life, beyond contributions to academia.

These impacts are among those defined as ‘benefits’ in the ‘research impact pathway’ (ARC, 2015), linked from the same website. Benefits are the final stage in the pathway, which runs from ‘outputs’ through ‘outcomes’ to ‘benefits’. Alongside citations, ‘outcomes’ include ‘implementation of programs and policy’. It remains to be seen whether this broader view of impact pathways will alter perceptions of economics and business research and attendant employment practices.

Marginson (2009) has argued that the rise of a knowledge economy has generated pressures to regulate what he calls ‘public good knowledge’ by assigning hierarchies of status and value to it. He cited instances such as the production of ‘institutional league tables, research rankings, publication and citation metrics, journal hierarchies, and other comparative output measures such as outcomes of student learning’ (p. 1). Marginson argues that the emergence of the Shanghai Jiao Tong University index in 2003 was the first attempt to rank universities on the basis of transparent evidence rather than reputation. This index, Marginson argues, was flawed in focusing only on research and depending too much on nomination- and reputation-based Nobel Prize winning. He points to a further drawback: namely, that it rewards the university of current employment, although he does not explicitly identify the incentive thus created for efforts to ‘poach’ high-flying academics from the university where their research was undertaken. As Marginson notes, the drive to rank universities and departments has generated a proliferation of bibliometric and journal ranking systems.

The critical literature on bibliometric and other data systems and their use to generate ‘league tables’ has been of two types: technical critiques of method (Adler and Harzing, 2009; Hussain, 2015) and analyses of impacts. We focus here on the latter. The term ‘league table’ of course is a sporting metaphor, and several commentators refer to the need, particularly in business and economics faculties, to ‘play the rankings game’ as a condition of maintaining legitimacy. Wedlin (2006) traces the intensification of rating practices in the economics field to the spread of the MBA and management education more broadly, and the movement of institutions such as the Financial Times into a field that became the meeting place of consumer market and audit-based regulation. Particularly in countries (such as Australia) where higher education forms a large segment of the export market, universities are under considerable pressure from government to achieve the global rankings that will attract international students. Within countries such as Australia, too, the combination of competition for local student fee revenue in the education ‘market’ and competition for government funds linked to metrics-based accountability appears to have resulted in the emergence of a strategic approach to ‘branding’, based on climbing the various international rankings ladders.

One effect has been the recruitment of high-profile international scholars (see, for example, UNSW Media Office, 2016). It is not suggested that Australian universities would be engaged in the cruder ‘gaming’ aspects of external recruitment, of the kind criticised by the UK Stern Report, such as cross-national appointments of high-flying international scholars or the movement of staff to new institutions shortly before a research quality assessment census date (Stern, 2016: 12). It is sufficient for the present analysis that such a recruitment strategy has the potential effect of restricting the local academic labour market for those whose doctoral research has focused on Australian local issues. Hence, high-profile research in economic and labour relations policy fields that require local understanding is likely to be displaced by research in more internationally generic fields.

A second and related effect of efforts to climb the journal ranking league ladder is said to have been the narrowing of research and publication focus to studies in the mould of publications favoured in top-ranking journals. Writing about the UK Association of Business Schools’ journal listing, Willmott (2011) used a fetish analogy to argue that publication in a journal with a rating of 4 (roughly equivalent, in Australia, to A*) had become a substitute fixation, pursued for its own sake; however, like some bondage practices, it had also become an asphyxiation risk, constraining (his word was ‘perverting’) scholarship and inquiry. Rafols et al. (2011) and Nedeva et al. (2012) identified the types of research likely to be constricted in this way: the development of new research areas, more specialised research and multidisciplinary methodologies. Hussain (2015) argues that as a result, there is a threat to ‘the long term growth and enrichment of the academic environment for a generation’ (p. 135). Hussain argues that the ranking methodology adopted is inappropriate to a field as diverse as the social sciences, where there is no one ‘platonic’ standard of quality, where paradigms compete and may be incommensurable and where there is no clear basis for weighting the multiple factors that may be used as criteria.

In relation to the economics discipline in Australia, Neri and Rodgers (2015) examined the publication record of Australian academics in outlets deemed the world’s best during the period covered by this article, 2001–2010. Compiling and analysing a database of over 26,000 publications in the world’s ‘45 top journals’, they found that Australia’s output, in absolute and relative terms, converged to the levels of the most research-intensive countries on a per capita basis and was distributed more widely than previously across the nation’s universities. These findings, however, do not address the two specific issues that we are addressing here. The first is to what extent Australian economists publishing between 2001 and 2010 had received their research training in Australia. The second is the extent to which they were writing about local issues in a way that would beneficially address local policy issues.

Methodology and data collection

For each of the journals listed in the Diamond list (see Diamond, 1989) for the period 2001–2010, we extracted information on the names of authors, their institutional affiliation, the title of the article, the year it was published and the issue and volume of the publication. The total number of articles with at least one author affiliated with an Australian economics department or business school was then selected. We removed from this selection book reviews, review articles, obituaries, replies and rejoinders.

In this time frame, 105 articles were published in Economics Letters, making it by far the journal in which the most articles were published. However, given that articles published in Economics Letters are subject to a 2000-word limit and are brief communications in nature, and are therefore not strictly comparable with full-length articles published in the other 26 journals, it was decided to exclude this journal from our analysis. The total number of articles selected, excluding Economics Letters, amounted to 327.

The proportionate share of Australian authorship was then determined on the basis of affiliation with an Australian university department of economics or business school: for each journal, the share attributable to overseas affiliations was excluded, leaving just the Australian authorship of the 327 articles. This amounted to 193.44 Equivalent Full Papers (EFPs) Footnote 1 published in the Diamond list. The EFPs by journal and institution are shown in Tables 1 and 2.
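The fractional attribution behind the EFP counts can be sketched as follows. This is an illustrative reconstruction of the method described above and in Footnote 1, not the authors’ actual code; the article data are invented for demonstration.

```python
# Sketch of Equivalent Full Paper (EFP) attribution: each article contributes
# 1/n EFP per author (n = number of authors), and a country's EFP share of an
# article is the sum of the fractions credited to its authors.
# The article records below are hypothetical, for illustration only.

def efp_share(authors, country="Australia"):
    """Fraction of one article credited to authors affiliated with `country`."""
    per_author = 1.0 / len(authors)
    return per_author * sum(1 for a in authors if a["country"] == country)

articles = [
    {"authors": [{"name": "A", "country": "Australia"}]},          # sole-authored: 1.0 EFP
    {"authors": [{"name": "B", "country": "Australia"},
                 {"name": "C", "country": "UK"}]},                 # 0.5 EFP
    {"authors": [{"name": "D", "country": "Australia"},
                 {"name": "E", "country": "Australia"},
                 {"name": "F", "country": "US"}]},                 # 2/3 ≈ 0.67 EFP
]

total_efp = sum(efp_share(a["authors"]) for a in articles)
print(round(total_efp, 2))  # 2.17
```

Summing such fractions across all 327 articles is what yields a non-integer total such as the 193.44 EFPs reported in the study.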

Table 1. Number of Equivalent Full Papers (EFPs) published by Australian economics academics in the Diamond list by journal 2001–2010.

Table 2. Number of Equivalent Full Papers (EFPs) published by Australian economics departments in the Diamond list 2001–2010.

ANU: Australian National University; QUT: Queensland University of Technology; UNE: University of New England; UNSW: University of New South Wales; UTS: University of Technology Sydney; UWA: University of Western Australia; UWS: University of Western Sydney.

Discussion of findings

As Table 1 indicates, six journals accounted for more than 50% of all articles, with the Journal of Econometrics alone accounting for 15% of all publications in this period. A further 8% of articles were published in Oxford Economic Papers, closely followed by the Journal of Public Economics, the Canadian Journal of Economics, the Journal of Development Economics and the Economic Journal. At the other end of the spectrum, there were no articles published in the Brookings Papers on Economic Activity or the Quarterly Journal of Economics, while only 0.83 EFPs were published in the Review of Economic Studies and the Journal of Political Economy.

During the period covered by this study, the first two ERA reports were published. The total number of economics journal articles submitted for assessment for the period 2003–2008, covered by the first report published in 2010, amounted to 4170. During this time, the number of articles published in the Diamond list with at least one Australian academic economist participating amounted to 123.2 EFPs, representing just 2.95% of all economics articles. The second ERA report, which covered the period 2005–2010, announced that the number of articles had increased to 5191.6, an increase of about 24%. During that period, the total number of articles published in the Diamond list with at least one Australian academic economist author was 140.26 EFPs, representing 2.7% of the total.
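The shares reported above follow directly from the figures given in the text and can be checked with simple arithmetic:

```python
# Back-of-envelope check of the percentages quoted above.
# All figures are taken from the text of this article.
era1_total = 4170      # economics articles assessed, first ERA report (2003-2008)
era2_total = 5191.6    # economics articles assessed, second ERA report (2005-2010)
diamond1 = 123.2       # Diamond-list EFPs with an Australian author, 2003-2008
diamond2 = 140.26      # Diamond-list EFPs with an Australian author, 2005-2010

print(round(100 * diamond1 / era1_total, 2))                 # 2.95
print(round(100 * diamond2 / era2_total, 2))                 # 2.7
print(round(100 * (era2_total - era1_total) / era1_total))   # 24
```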

Table 2 presents the number of EFPs drawn from the Diamond list which have been published by Australian economics departments from 2001 to 2010. During the period 2001–2005, the Australian National University (ANU) published close to 30% of articles and the University of Melbourne accounted for about 20%; together they accounted for nearly half of all the journal articles published. Just over 80% of all articles were published by five universities, namely, the ANU, Melbourne, the University of New South Wales, the University of Queensland and Monash University. This period overlapped with the 2010 ERA exercise where only the University of Melbourne achieved a ranking of 5; the rest were ranked at 4.

In the period 2006–2010, which coincided with the ERA (2012) exercise, Melbourne turned in the best performance, publishing 22.3% of all articles, while the ANU slipped significantly but was still able to clinch second place, publishing 15.2% of articles. Monash made significant gains to take third place from the University of New South Wales, which fell to fourth place. In this period, the ANU, Monash and the University of Technology Sydney (UTS) managed to improve their ERA rating from 4 (above world standard) to 5 (well above world standard). The University of Sydney, the University of Adelaide and the University of Western Australia (UWA) all improved markedly. This is consistent with the former two universities increasing their ranking from 3 to 4. The UWA maintained its 4 rating. That said, independent research by Davidson (2013) and Bloch (2012) came to the same conclusion: that the ERA exercises ranked the quality of economic research far lower than an objective assessment of the data would.

Overall, the leading universities – ANU, Melbourne, Monash, the University of New South Wales, the University of Sydney, the UWA, the University of Queensland and Adelaide University – accounted for nearly 84% of the total articles published in our catchment during this period. This performance was consistent with all of these institutions achieving an ERA ranking of 4 or 5, and the dominance of the top two universities was less pronounced.

We then read the abstracts of all 327 articles with at least one author having an Australian economics department or business school affiliation over the period 2001–2010. The articles which, as stated in their abstracts, addressed an Australian issue or made use of Australian data were then identified. When abstracts stated that data were used in a study, we read the entire article to ascertain whether Australian data were used. Only the 14 articles listed below met this criterion, representing about 4.3% of articles:

  • Economica (2009) Hours of work and gender identity: Does part-time work make the family happier? Booth A (ANU) and Van Ours J (University of Essex);

  • Economic Inquiry (2008) Hedonic imputation and the price index problem: an application to housing. Hill R (University of New South Wales) and Melser D (Monash University);

  • Economic Inquiry (2006) The impact of high-tech capital on productivity: evidence from Australia. Connolly E (Reserve Bank of Australia) and Fox K (University of New South Wales);

  • Journal of Econometrics (2008) State dependence in youth labour markets experiences and the evaluation of policy interventions. Doiron D (University of New South Wales) and Gorgens T (ANU);

  • Journal of Econometrics (2008) Finite sample properties of the QMLE for the Log-ACD model: application to Australian stocks. Allen D (Edith Cowan University), Chan F (Curtin University), McAleer M (UWA) and Peiris S (University of Sydney);

  • Journal of Financial Economics (2004) The value of dividend imputation tax credits in Australia. Cannavan D, Finn F and Gray S (all authors from the University of Queensland);

  • Journal of Financial Economics (2008) Rights offerings, take-up, renounceability and underwriting status. Balachandran B, Faff R (Monash University) and Theobald M (University of Birmingham);

  • Journal of Public Economics (2004) An experimental evaluation of tax reporting schedules: a case of evidence-based tax administration. Wenzel M and Taylor N (ANU);

  • Journal of Public Economics (2009) Born on the first of July: an (un) natural experiment in birth timing. Gans J (University of Melbourne) and Leigh A (ANU);

  • Journal of Public Economics (2009) Propensities to engage in and punish corrupt behaviour: experimental evidence from Australia, India, Indonesia and Singapore. Cameron L, Erkal N and Gangadharan L (all University of Melbourne) and Chaudhuri A (University of Auckland);

  • Oxford Economic Papers (2004) The gender earnings gap: effects of institutions and firms – a comparative study of French and Australian private firms. Meng X (ANU) and Meurs D (University of Paris);

  • Oxford Economic Papers (2007) Identifying aggregate demand and aggregate supply shocks in a small open economy. Enders W (University of Alabama) and Hurn S (Queensland University of Technology);

  • Oxford Economic Papers (2008) Fertility, income inequality and labour productivity. Guest R and Swift R (both Griffith University);

  • Oxford Economic Papers (2010) Innovations and the determinants of company survival. Buddelmeyer H, Jensen P and Webster E (all authors from Melbourne Institute of Applied Economic and Social Research, University of Melbourne).
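The share quoted above (about 4.3%) follows directly from the counts:

```python
# Share of Diamond-list articles (with an Australian-affiliated author)
# that addressed an Australian issue or used Australian data.
australian_focus = 14   # articles meeting the criterion, listed above
total_articles = 327    # all selected Diamond-list articles, 2001-2010

share = 100 * australian_focus / total_articles
print(round(share, 1))  # 4.3
```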

Given the dearth of such articles, we proceed to examine some possible reasons for this result. One finding which appears consistent with it is that of Das et al. (2013), who report that only 1.5% of all papers written about countries other than the US are published in first-tier journals; so there may be what we refer to as ‘the US effect’.

Another explanation may be that the hiring practices of the Australian economics departments which publish in the most highly ranked economics journals have become internationalised. In this context, internationalisation is defined as the percentage of staff with overseas postgraduate qualifications. Several commentators have drawn attention to the increased Americanisation of Australian economics in the post-war period (Coleman, 2015; Lodewijks and Stokes, 2014). It was Groenewegen and McFarlane (1990) who first suggested that American-trained Australian economists ‘… did not seem to be aware of particular institutional, cultural and historical characteristics of the Australian economy’ (p. 225). These authors were also pessimistic about the prospects of the local economics profession in terms of the focus on uniquely Australian research issues (Groenewegen and McFarlane, 1990: 237).

Millmow (2010) tested the Groenewegen and McFarlane proposition of the Americanisation – or, put another way, internationalisation – of Australian economics by looking first at the credentials of the staff in the economics departments of the leading Australian universities. We now test the internationalisation explanation by examining those Australian economics departments ranked at or above world standard in the 2012 ERA and comparing their rankings with their output in the Diamond list over the period 2001–2010.

Table 3 indicates that the two economics departments (Melbourne and the ANU) which produced over 40% of articles in the Diamond list between 2001 and 2010 also had the lowest percentage of staff possessing an Australian doctoral qualification. Furthermore, staff with overseas doctorates comprised 50% or more of all staff with a PhD in seven of the nine economics departments which received an ERA ranking of 4 or 5 in the ERA (2012) exercise.

Table 3. Australian PhDs as a percentage of all staff with PhDs in 1998 and 2008, and ERA (2012) rankings of 3–5.

Source: Adapted from Millmow (2010: 89–90).

UTS: University of Technology Sydney; ERA: Excellence in Research Australia.

ERA rating: 5 = well above world standard; 4 = above world standard; 3 = at world standard. Only those ranked at ‘at world standard or above’ are shown.

Only Monash and the UWA bucked that trend by hiring more Australian-trained PhDs. Another economics department to score 5 in the latest ERA, UTS, went from having 44% to having 63% of its faculty with an overseas PhD. However, that department won its ranking by employing some high-performing individuals as an identifiable response to the ERA process. Twenty-five years on, it is apparent that Groenewegen and McFarlane (1990) were right in their prognostications about the Americanisation or, more accurately, internationalisation of Australian economics.

Lodewijks and Stokes (2014) have articulated the implications of this Americanisation process for the local economics profession. They argue that the incentive structures are ‘perverse’ in being oriented towards publishing in the top, predominantly American, journals. This argument is consonant with our point that applied Australian research would not be readily acceptable to those journals. The race to publish in such journals is thought to have resulted in departmental imperatives such as sparing highly research-active staff from at least some of the humdrum of teaching and administration. Lodewijks and Stokes (2014) also reason that there will be less focus on local policy concerns specific to Australia (p. 11). At a broader level, there is another long-term implication for the Australian academic economics profession of aiming for the most prestigious journals. This article has shown that only the economics departments of leading Australian universities were ranked around or above world standard. There is a distinct possibility that in successive ERA rounds, those economics departments that score poorly will be rationalised out of existence. There are several examples of Australian economics schools that have already experienced this fate. If this pattern continues, economics degrees will be taught only at elite universities, as university administrations focus on where their research strengths lie. This process is already underway in Britain, with 14 new universities withdrawing or closing down their economics programmes (Johnston et al., 2014).

Conclusion

This article has sought to investigate the contribution of Australian economics departments to world-class academic economic research, particularly research focused on Australia. Our finding, based on a survey of all articles published in the Diamond list of economics journals over the period 2001–2010, is that only about 4.3% of articles addressed an Australian issue or used Australian data. The two highest ranked economics departments during that time produced over 40% of the relevant articles but also had the lowest percentage of staff possessing an Australian doctoral qualification. Furthermore, staff with overseas doctorates comprised 50% or more of all staff in seven of the nine economics departments which received an ERA ranking of 4 or 5 in 2012.

Aiming to publish in the highest ranked journals is certainly one indicator of the quality of research and a worthwhile goal in its own right, and greater internationalisation of our economics departments may have assisted in reaching this objective. However, until now, there has been little recognition and debate about whether our best academic research should also be directed to the equally important objective of examining the most pressing economic issues confronting Australia and providing a much broader return for public funding – also a stated aim of the ERA exercise. That this has not yet occurred raises the question of whether the objectives of ‘impact’ and ‘outcome’, as currently defined, are complementary or in fact contradictory. The ‘US effect’ and the internationalisation of the staff of Australian economics departments suggest that trade-offs could exist. Further discussion of how a better balance can be struck between the twin objectives of the ERA is needed. It is worth investigating whether our findings may also apply to other countries which use similar research metrics in determining research quality.

Acknowledgement

We acknowledge the assistance of two anonymous referees. We wish to thank Jerry Courvisanos for earlier comments on this paper.

Funding

The author(s) received no financial support for the research, authorship and/or publication of this article.

Footnotes

1. Papers were attributed proportionally to each journal and institution according to the number of authors: 1 Equivalent Full Paper (EFP) for a sole-authored article, 0.5 EFP per author for two co-authors, 0.33 EFP per author for three co-authors and 0.25 EFP per author for four co-authors.

References

ACIL Allen Consulting (2013) Benefits realisation review of Excellence in Research for Australia. Final Report, Report to the Australian Research Council, 27 September. Brisbane: ACIL Allen Consulting.
Adler NJ and Harzing AW (2009) When knowledge wins: transcending the sense and nonsense of academic rankings. The Academy of Management Learning and Education 8(1): 72–9.
Australian Government (2015) National Innovation and Science Agenda: researchers and universities. Available at: http://www.innovation.gov.au/audience/researchers-and-universities (accessed 29 September 2016).
Australian Research Council (ARC) (2015) Glossary of terms for research impact. Available at: http://www.arc.gov.au/sites/default/files/filedepot/Public/ARC/Research%20Impact/Glossary_for_research_impact.pdf (accessed 20 May 2016).
Australian Research Council (ARC) (2016) Excellence in research for Australia. Available at: http://www.arc.gov.au/excellence-research-australia (accessed 27 July 2016).
Bloch H (2012) An uneven playing field: rankings and ratings for economics in ERA 2010. Economic Papers 31(4): 418–427.
Coleman W (2015) A young tree dead? The story of economics in Australia and New Zealand. In: Barnett V (ed.) Routledge Handbook of the History of Global Economic Thought. London: Routledge, pp. 281–293.
Das J, Do Q, Shaines K, et al. (2013) US and them: the geography of academic research. Journal of Development Economics 105: 112–130.
Davidson S (2013) Excellence in research for Australia: an audit of the applied economics rankings. Agenda 20(2): 5–20.
Diamond A (1989) The core journals of economics. Current Contents 21(1): 4–11.
Excellence in Research Australia (ERA) (2010) Annual report. Available at: http://www.arc.gov.au/era/era_2010/outcomes_2010.htm (accessed 16 November 2014).
Excellence in Research Australia (ERA) (2012) Annual report. Available at: http://www.arc.gov.au/era/era_2012/outcomes_2012.htm (accessed 16 November 2014).
Groenewegen P and McFarlane B (1990) A History of Australian Economic Thought. London: Routledge.
Hussain S (2015) Journal list fetishism and the ‘sign of 4’ in the ABS guide: a question of trust? Organization 22(1): 119–138.
Johnston J, Reeves A and Talbot S (2014) Has economics become an elite subject for elite UK universities? Oxford Review of Education 40(5): 590–609.
Lodewijks J and Stokes T (2014) Is academic economics withering in Australia? Agenda 21(1): 69–88.
Marginson S (2009) The knowledge economy and higher education: rankings and classifications, research metrics and learning outcomes measures as a system for regulating the value of knowledge. Higher Education Management and Policy 21(1): 1–15.
Millmow A (2010) The changing sociology of the Australian academic economics profession. Economic Papers 29(1): 87–95.
Nedeva M, Boden R and Nugroho Y (2012) Rank and file: managing individual performance in university research. Higher Education Policy 25(3): 335–360.
Neri F and Rodgers J (2015) The contribution of Australian academia to the world’s best economics research: 2001 to 2010. Economic Record 91(292): 107–124.
Rafols I, Leydesdorff L, O’Hare A, et al. (2011) How journal rankings can suppress interdisciplinary research: a comparison between innovation studies and business and management. Research Policy 41(7): 1262–1282.
Stern LN (2016) Building on success and learning from experience: an independent review of the research excellence framework, July. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/541338/ind-16-9-ref-stern-review.pdf (accessed 1 September 2016).
UNSW Media Office (2016) UNSW goes global to recruit 600 of the best research minds, 4 March. Available at: http://newsroom.unsw.edu.au/news/general/unsw-goes-global-recruit-600-best-research-minds (accessed 2 September 2016).
Wedlin L (2006) Ranking Business Schools: Forming Fields, Identities and Boundaries in International Management Education. Cheltenham: Edward Elgar.
Wilkins S and Huisman J (2012) UK business school rankings over the last 30 years (1980–2010): trends and explanations. Higher Education 63(3): 367–382.
Willmott H (2011) Journal list fetishism and the perversion of scholarship: reactivity and the ABS list. Organization 18(4): 429–442.

Table 1. Number of Equivalent Full Papers (EFPs) published by Australian economics academics in the Diamond list, by journal, 2001–2010.


Table 2. Number of Equivalent Full Papers (EFPs) published by Australian economics departments in the Diamond list, 2001–2010.


Table 3. Australian PhDs as a percentage of all staff with PhDs in 1998 and 2008, and ERA (2012) rankings of 3–5.