
Authors' reply

Published online by Cambridge University Press:  02 January 2018

Rowena Jacobs
Affiliation:
Centre for Health Economics, University of York, UK. Email: rowena.jacobs@york.ac.uk
Eliana Barrenho
Affiliation:
Imperial College Business School, Imperial College London, UK

Copyright © Royal College of Psychiatrists, 2011 

Power calculations are seldom used in the multiple regression context, particularly with panel data and population-level data; they tend instead to be made with trial-based data to estimate appropriate sample sizes. Many would argue that post hoc power calculations are misleading and irrelevant.1–3 Nevertheless, a post hoc power calculation based on the ordinary least squares model, using the total number of valid cases in the analysis, the total number of predictors in the model, the model R-squared and the assumed significance level (P = 0.05), suggests that the power is 1.00 for all models. By convention, this value should be greater than or equal to 0.80.
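For illustration, a minimal sketch in Python of this kind of calculation, following Cohen's f² approach for the overall F-test of an ordinary least squares model; the sample size, predictor count and R-squared below are placeholder values, not the figures from our models.

```python
# Sketch of a post hoc power calculation for the overall F-test of an
# OLS regression (Cohen's f-squared approach). Inputs are illustrative.
from scipy import stats

def ols_post_hoc_power(n, n_predictors, r_squared, alpha=0.05):
    """Power of the overall F-test, given n cases, k predictors and R^2."""
    f2 = r_squared / (1.0 - r_squared)        # Cohen's effect size f^2
    df_num = n_predictors                     # numerator df (u)
    df_den = n - n_predictors - 1             # denominator df (v)
    nc = f2 * (df_num + df_den + 1)           # noncentrality parameter
    f_crit = stats.f.ppf(1.0 - alpha, df_num, df_den)
    return 1.0 - stats.ncf.cdf(f_crit, df_num, df_den, nc)

# With population-level panel data, n is large and power saturates at 1.00
print(round(ols_post_hoc_power(n=5000, n_predictors=12, r_squared=0.3), 2))
```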

More importantly, though, the benefit of the difference-in-difference methodology is that it provides more precise estimates than the previous analysis, and also allows the simultaneous inclusion of covariates, such as the team fidelity criteria (e.g. crisis resolution and home treatment teams (CRHTTs) offering a 24-hour service), as well as overall time trends. There are fundamental differences between the two types of analyses, the difference-in-difference methodology being a far more powerful and robust policy evaluation tool.
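A minimal sketch of such a difference-in-difference specification in Python, assuming statsmodels; the variable names and the synthetic panel are hypothetical stand-ins, not our data. Year dummies absorb overall time trends, the fidelity covariate enters directly, and the interaction term is the policy effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per area and year
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "area": np.repeat(np.arange(40), 10),
    "year": np.tile(np.arange(1998, 2008), 40),
})
panel["crhtt"] = (panel["area"] < 20).astype(int)      # ever-treated areas
panel["post"] = (panel["year"] >= 2001).astype(int)    # after policy change
panel["offers_24h"] = rng.integers(0, 2, len(panel))   # fidelity covariate
panel["admissions"] = 100 - 5 * panel["crhtt"] * panel["post"] \
    + rng.normal(0, 5, len(panel))

# crhtt:post is the difference-in-difference estimate; C(year) dummies
# capture common time trends, so the standalone post term is omitted
model = smf.ols("admissions ~ crhtt + crhtt:post + offers_24h + C(year)",
                data=panel)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["area"]})
print(result.params["crhtt:post"])  # recovers roughly -5 in this simulation
```

Clustering the standard errors by area allows for serial correlation within areas over time, a standard precaution in difference-in-difference analyses of panel data.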

We agree that future studies should ideally analyse admissions (and potentially other factors) at CRHTT level. We explored the possibility of doing this by contacting several teams to ask about their geographical boundaries, but found, surprisingly, that many teams were in fact unable to clearly delineate their geographical ‘patch’, and that even where they could define their current boundaries, these had often changed over time, making an analysis of long-term trends with difference-in-difference methodology unfeasible. Moreover, a large-scale national longitudinal study would require data from before the policy change (circa 1998) to assess the policy impact effectively, for which routine administrative data are better suited than data from individual electronic records systems, which vary hugely in detail, quality and method of collection.

References

1 Levine, M, Ensom, MH. Post hoc power analysis: an idea whose time has passed? Pharmacotherapy 2001; 21: 405–9.
2 Hoenig, JM, Heisey, DM. The abuse of power: the pervasive fallacy of power calculations for data analysis. Am Stat 2001; 55: 19–24.
3 Fogel, J. Post hoc power analysis: another view. Pharmacotherapy 2001; 21: 1150.