Further reading

We draw on many sources in producing the Realising Ambition Programme Insights, including those listed below. The list will grow as the series develops.

  • Aylott, M., McNeil, B., & Hibbert, T. (2013). Noticing the change: A framework of outcomes for young people. London: The Young Foundation and the Catalyst Consortium.
  • Bamberger, M., Rugh, J., & Mabry, L. (2012). RealWorld evaluation: Working under budget, time, data, and political constraints. Thousand Oaks, CA: SAGE.
  • Blase, K., & Fixsen, D. (2013). Core intervention components: Identifying and operationalizing what makes programs work. Washington, DC: Office of the Assistant Secretary for Planning and Evaluation, Office of Human Services Policy, U.S. Department of Health and Human Services.
  • Cabinet Office. (2014). Outcomes frameworks: A guide for providers and commissioners of youth services. London: Cabinet Office.
  • Charities Evaluation Services. (2013). The CES resource guide: Evaluating outcomes and impact. London: Charities Evaluation Services.
  • Cody, S., & Asher, A. (2014). Smarter, better, faster: The potential for predictive analytics and rapid-cycle evaluation to improve program development and outcomes. Washington, DC: Brookings Institution.
  • Dartington Social Research Unit. (2013). Design and refine: Developing effective interventions for children and young people. Dartington, England: Dartington Social Research Unit.
  • Dartington Social Research Unit. (2013). Investing in Children ‘What Works’ standard of evidence. Dartington, England: Dartington Social Research Unit. See: http://investinginchildren.eu/standards-evidence
  • Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.
  • Fixsen, D., et al. (2005). Implementation research: A synthesis of the literature. Tampa, FL: National Implementation Research Network (FMHI Publication #231).
  • Garland, A., Kruse, M., & Aarons, G. (2003). Clinicians and outcome measurement: What’s the use? Journal of Behavioral Health Services & Research, 30, 393–405.
  • Gloster, R., Aston, J., & Foley, B. (2014). Evaluation of Project Oracle. London: Institute for Employment Studies and NESTA.
  • Gottfredson, D., Cook, T., Gardner, F., Gorman-Smith, D., Howe, G., Sandler, I., & Zafft, K. (2015). Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: Next generation. Prevention Science, 1–34.
  • Hall, C., et al. (2014). Implementation of routine outcome measurement in child and adolescent mental health services in the United Kingdom: A critical perspective. European Child & Adolescent Psychiatry, 23, 239–242.
  • Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about fidelity of implementation in schools? Exceptional Children, 79(2), 181–193.
  • Inspiring Impact: http://inspiringimpact.org
  • International Centre for Social Franchising. (2015). Social Replication Toolkit (Version 0). London: International Centre for Social Franchising.
  • Johnson, K., Gustafson, D., Ewigman, B., et al. (2015). Using rapid-cycle research to reach goals: Awareness, assessment, adaptation, acceleration. AHRQ Publication No. 15-0036. Rockville, MD: Agency for Healthcare Research and Quality.
  • Kazimirski, A., & Pritchard, D. (2014). Building your measurement framework: NPC’s four pillar approach. London: New Philanthropy Capital.
  • Little, M., & Edovald, T. (2012). Return on investment: The evaluation of costs and benefits of evidence-based programs. Psychosocial Intervention, 21(2), 215–221.
  • LUMA Institute. (2012). Innovating for people: Handbook of human-centered design methods. Pittsburgh, PA: LUMA Institute.
  • Mulgan, G., & Ali, R. (2007). In and out of sync: The challenge of growing social innovations. London: The Young Foundation.
  • Nutley, S., Powell, A., & Davies, H. (2013). What counts as good evidence? Provocation paper for the Alliance for Useful Evidence. London: Alliance for Useful Evidence.
  • Paulsell, D., Del Grosso, P., & Supplee, L. (2014). Supporting replication and scale-up of evidence-based home visiting programs: Assessing the implementation knowledge base. American Journal of Public Health, 104(9), 1624–1632.
  • Petticrew, M., & Roberts, H. (2003). Evidence, hierarchies, and typologies: Horses for courses. Journal of Epidemiology and Community Health, 57, 527–529.
  • Provost, L., & Bennett, B. (2015). What’s your theory? Driver diagram serves as tool for building and testing theories for improvement. Quality Progress, July, 36–43.
  • Puddy, R. W., & Wilkins, N. (2011). Understanding evidence: A guide to the continuum of evidence of effectiveness. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control.
  • Puttick, R., & Ludlow, J. (2012). Standards of evidence for impact investing. London: NESTA.
  • Spoth, R., et al. (2013). Addressing core challenges for the next generation of type 2 translation research and systems: The translation science to population impact (TSci Impact) framework. Prevention Science, 14(4), 319–351.
  • Stern, E. (2015). Impact evaluation: A guide for commissioners and managers. London: Bond for International Development.
  • Wolpert, M., Cheng, H., & Deighton, J. (2014). Review of four Patient Reported Outcome Measures (PROMs): SDQ, RCADS, C/ORS and GBO: Their strengths and limitations for clinical use and service evaluation. Child and Adolescent Mental Health. doi: 10.1111/camh.12065