The Realising Ambition programme uses a lot of technical terms – from ‘replication’ to ‘robust experimental evaluation’ and everything in between!
Below is a handy guide to how we define some of our common phrases:
- Adaptable components
Those aspects of a service that may be altered, refined or adapted in order to foster greater engagement, retention or satisfaction of those in receipt of a service (yet do not disrupt the underlying core mechanisms of the service or intervention).
- Adherence
A dimension of fidelity. Refers to whether the core components of a programme are delivered as designed, to those who are eligible for the service, by appropriately trained staff, with the right protocols, techniques and materials and in the prescribed locations or contexts.
- Attribution
In the context of evaluation, this refers to whether or not changes in beneficiary outcomes may be explained or accounted for by a service or activity. A lack of attribution means that it is not possible to know whether or not any changes in beneficiary outcomes were the direct result of the service or activity, or would have otherwise occurred.
- Business case
A business case provides justification for a proposed project or programme. Ideally it includes an analysis of costs and likely benefits, as well as a detailed budget, and also evidence of the need and demand for the service.
- Client management information system
A database that allows projects to view real-time data on outcomes, fidelity monitoring, quality assurance processes and other delivery data such as costs and staffing. High-quality systems will typically allow users to view data in a visual format (graphs, charts etc) and enable data to be analysed and presented in a variety of ways (by delivery year, project type, outcome etc). These systems are useful for monitoring children’s outcomes as they progress through a programme, monitoring the quality of delivery across multiple sites, and testing the results of adaptations to programme components.
- Commissioner
Responsible for the strategic allocation of public funds to projects, programmes or services that best address the needs of children, young people and families in their geographical and service area (for example Children’s Services, Health, Education, Youth Justice etc). The priorities of commissioners are to engage services that represent good value for money as well as quality delivery and increasing the likelihood of positive impact.
- Control group / comparison group
A group of participants within an experimental evaluation who do not receive the programme or service under evaluation, in order to measure the outcomes that would have occurred without the presence of the programme.
- Core components
The key activities that make the service work. Put another way, the specific aspects or mechanisms of a service that lead to the desired change in outcomes. For a service to be replicated successfully, providers need to be clear about what can and cannot be changed.
- Cost-avoidance
Refers to actions taken to reduce future costs. Cost-avoidance as a value is the difference between what would have been spent had no avoidance measures been implemented and what is actually spent.
- Cost-benefit analysis
The estimation of financial returns on an investment or service. Returns are typically estimated for individual recipients of the service, agencies providing the service and the state. Cost-benefit analyses rely upon accurate cost information and robust evidence of impact (ideally from experimental evaluations). A cost-benefit analysis may produce a calculation of net benefit (benefits minus costs) or a ratio of benefits to costs.
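The two summary figures mentioned above, net benefit and the benefit-cost ratio, amount to simple arithmetic. A minimal sketch in Python (all cost and benefit figures are invented for illustration):

```python
# Illustrative cost-benefit calculation (all figures hypothetical).
costs = 120_000      # total cost of delivering the service
benefits = 300_000   # estimated monetised benefits to recipients, agencies and the state

net_benefit = benefits - costs          # benefits minus costs
benefit_cost_ratio = benefits / costs   # return per pound spent

print(f"Net benefit: £{net_benefit:,}")                 # Net benefit: £180,000
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # Benefit-cost ratio: 2.50
```

In practice the ‘benefits’ figure is the hard part: it depends on robust evidence of impact and on assumptions about how improved outcomes translate into monetary value.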
- Data sharing
The lawful and responsible exchange of data and information between various organisations, people and technologies.
- Delivery and impact reporting system / Client management information system
Typically a web-based system that allows projects to view real-time data on outcomes, fidelity monitoring, quality assurance processes and other delivery data such as costs and staffing (see ‘Client management information system’).
- Demand
In the context of social interventions, the number of individuals who (a) match the particular target group within a given population and (b) actually want to participate in the programme.
- Early intervention
Intervening in the early stages in the development of difficulties (not necessarily at an early age). Early intervention activities or services seek to stop the escalation of difficulties with the aim of promoting subsequent health and development.
- Eligible young people
Those young people who fit the target criteria for a specific service or programme. This could be based upon factors such as their age or gender, or relate to the difficulties they may be experiencing such as homelessness, conduct disorder, or educational problems. Those young people who are eligible for a service or programme should be the same young people who are likely to benefit most from receiving it.
- Evaluation
Various aspects of a programme can be evaluated, including the process of delivery, user satisfaction and impact. Here evaluation refers to the use of social research procedures to investigate systematically the effectiveness of programmes or services in terms of improving children’s health and development.
- Evidential tapestry
Replication requires a range of evidence, both to justify it and to maintain high-quality delivery. For example, evidence of impact is not only important for understanding the outcomes of a service, but also useful in justifying its replication in a new area. Alongside this sits evidence of the need and demand for the service in a local area, as well as evidence relating to delivery quality and fidelity to the model. Different types of evidence, varying in quality and utility, can answer a range of questions helpful to practitioners and managers delivering services for children and families. Viewed together, this breadth, depth and quality of evidence forms an ‘evidential tapestry’.
- Evidence
Generally speaking, evidence is information that acts in support of a conclusion, statement or belief. In children’s services, this tends to be information indicating that the service works, ie is achieving the intended change in outcomes. We take a broader view in that evidence may support or challenge other aspects of service delivery, such as quality of implementation, reach and value for money.
- Evidence-based programme
A discrete, organised package of practices or services – often accompanied by implementation manuals, training and technical support – that has been tested through rigorous experimental evaluation, comparing the outcomes of those receiving the service with those who do not, and found to be effective, ie it has a clear positive effect on child outcomes. In the Standards of Evidence developed by the Dartington Social Research Unit, used by Project Oracle, NESTA and others, this relates to ‘at least Level 3’ on the Standards.
- Evidence-Confidence Framework
The Realising Ambition ‘Evidence-Confidence Framework’ is a tool that can be used to help judge the strength and overall balance of different types of evidence for a particular service being replicated, and to identify areas of development and opportunity. It is structured around a five-part definition of successful replication: (i) a tightly defined service; (ii) that is effectively and faithfully delivered to those that need it; (iii) evidence is used to learn and adapt, as required; (iv) there is confidence that outcomes have improved; and (v) the service is cost-beneficial and sustainable. A simple five-point colour grading system is used to grade the strength and quality of each type of evidence: the lightest blue representing the strongest evidence and the darkest blue the weakest.
- Exposure / Dosage
Refers to the ‘amount’ of programme or service a person receives. This could be the number of total sessions attended, the length of those sessions, or how frequently they took place.
- Experimental evaluation / Robust evidence of impact
An evaluation that compares the outcomes of children and young people who receive a service to those of a control group of similar children and young people who do not. The control group may be identified by randomly allocating children and young people who meet the target group criteria – a randomised controlled trial or RCT – or by identifying a comparable group of children and young people in receipt of similar services – a quasi-experimental design or QED.
- Fidelity / Faithful delivery
The faithfulness to the original design and core components of a service. This can be assessed by fidelity monitoring tools, checklists or observations.
- Fidelity monitoring tools
Typically, these are checklists or observations which enable practitioners, programme managers, or researchers to monitor whether or not a programme is being delivered faithfully, according to its original design.
- Formative evaluation
An evaluation that takes place before or during the implementation of a programme or service to improve the quality of its design and delivery. This type of evaluation is useful for providing ongoing information and feedback to staff, and can also be useful in observing changes that take place after adaptations or modifications to a programme have been made (see also summative evaluation).
- Funder
Typically an organisation – foundation, charitable trust, or other philanthropic entity – that seeks to support social change through the funding of programmes, projects or services aimed at addressing “social problems”. Usually these organisations are focused on particular outcomes such as reducing inequality and homelessness, tackling the causes of gang violence, improving mental health support etc.
- Impact
The impact (positive or negative) of a programme or service on relevant outcomes (ideally according to one or more robust impact evaluations).
- Implementation
The process of putting a service into practice. Implementation science explores theory and evidence about how best to design and deliver effective services to people.
- Implementation handbook
A document that describes the processes and agreements for replicating an intervention in a new context. Typically it would include information on the structure and content of the programme, its intended outcomes and the resources needed to deliver it.
- Informed consent
In the context of routine outcome monitoring, the freely given agreement to complete questionnaires in the knowledge of what data is to be collected and how it will be used.
- Innovation
The process of translating a new idea into a service that creates value for the intended beneficiaries and which can be funded or commissioned.
- Logic model
A typically graphical depiction of the logical connections between the resources, activities, outputs and outcomes of a service. Ideally these connections will have some research underpinning them. Some logic models also include assumptions about the way the service will work.
- Manual
A document that covers all the things about a programme or service that are relevant wherever and whenever it is being implemented. This includes the research base for the programme, the desired outcomes, the logical connection between activities and these outcomes, the target group and all of the relevant training or delivery materials (see also ‘Implementation handbook’).
- Need
In relation to services for children and families, this refers to how many individuals in a specified population match the target group for the programme.
- Outcomes
Outcomes refer to the ‘impact’ or change that is brought about, such as a change in behaviour or physical or mental health. In Realising Ambition, all services seek to improve outcomes associated with a reduced likelihood of involvement in the criminal justice system.
- Outcome monitoring tools
Within the context of services for children and their families, these are typically questionnaires, structured interviews, or observations completed by young people or their parents, practitioners or researchers on a range of indicators of emotional and physical well-being and development.
- Pre-service intervention questionnaire
In the context of routine outcome monitoring or experimental evaluation, a baseline questionnaire completed shortly before any service provision takes place.
- Post-service intervention questionnaire
In the context of routine outcome monitoring or experimental evaluation, a follow-up to baseline questionnaires completed shortly after the conclusion of service provision (further follow-ups may also be undertaken).
- Prevention
Activities or services designed to stop difficulties or possible impairments from happening in the first place.
- Promising service / intervention
A tightly defined service, underpinned by a strong logic model, that has some indicative – though not experimental – evidence of impact. In the Standards of Evidence developed by the Dartington Social Research Unit, used by Project Oracle, NESTA and others, this relates to ‘Level 2’ on the Standards.
- Randomised Controlled Trial (RCT)
An evaluation that compares the outcomes of children and young people who receive a service to those of a control group of similar children and young people who do not. Within an RCT the control group is identified by randomly allocating children and young people who meet the target group criteria to either the service receipt or control groups.
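Random allocation as described above can be sketched in a few lines. A minimal illustration, assuming a simple 50/50 split of a hypothetical pool of eligible participants (IDs invented):

```python
import random

# Hypothetical pool of eligible young people (IDs invented for illustration).
eligible = [f"participant_{i:02d}" for i in range(1, 21)]

# Randomly allocate eligible participants to the service or control group.
rng = random.Random(42)   # fixed seed so the allocation is reproducible
shuffled = eligible[:]
rng.shuffle(shuffled)
midpoint = len(shuffled) // 2
service_group = shuffled[:midpoint]   # receive the programme
control_group = shuffled[midpoint:]   # do not receive the programme

print(len(service_group), len(control_group))   # 10 10
```

Because allocation is random, any systematic difference in later outcomes between the two groups can be attributed to the service rather than to pre-existing differences between the groups.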
- Rapid cycle testing
An approach, widely used in healthcare innovation, that implements and then tests small changes in order to accelerate service improvement efforts. It builds upon and operationalises the ‘Plan > Do > Study > Act’ (PDSA) cycle. It promotes rapid iteration in order to support improvement and delivery at scale.
- Realising Ambition Outcomes Framework
A measurement framework and set of associated tools designed to support delivery organisations to identify and measure the beneficiary outcomes most relevant to their work. The Realising Ambition framework comprises five broad outcome headings: (i) improved engagement with school and learning; (ii) improved behaviour; (iii) improved emotional well-being; (iv) stronger relationships; and (v) stronger communities. Under each of these five headings are a number of specific indicators – 31 in total. Each indicator is accompanied by a short standardised measure that may be completed by children and young people before and after service delivery.
- Reliability
In the context of outcome measurement, the degree to which a standardised measure consistently measures what it sets out to measure.
- Replication
Delivering a service into new geographical areas or to new or different audiences. Replication is distinct from scaling-up in that replication is just one way of scaling ‘wide’ – ie reaching a greater number of beneficiaries in new places. (See definition of ‘scale’).
- Routine outcome monitoring
The routine measurement of all (or a sample) of beneficiary outcomes in order to: (i) test whether outcomes move in line with expectations; (ii) inform where adaptations may be required in order to maximise impact and fit the local delivery context; and (iii) form a baseline against which to test such adaptations.
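As a minimal sketch of point (i), pre- and post-service scores on a standardised measure can be compared to check that outcomes move in the expected direction (all names and scores below are invented):

```python
# Hypothetical pre- and post-service scores on a standardised measure,
# where lower scores indicate fewer difficulties (all figures invented).
scores = {
    "child_a": {"pre": 18, "post": 12},
    "child_b": {"pre": 22, "post": 19},
    "child_c": {"pre": 15, "post": 16},
}

# Change score per beneficiary (negative = improvement on this measure).
changes = {name: s["post"] - s["pre"] for name, s in scores.items()}
average_change = sum(changes.values()) / len(changes)

print(changes)          # {'child_a': -6, 'child_b': -3, 'child_c': 1}
print(average_change)   # -8/3, i.e. roughly -2.67
```

Without a control group such monitoring cannot attribute the change to the service, but it can flag when outcomes are not moving in line with expectations.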
- Scale
A service is ‘at scale’ when it is available to many, if not most, of the children and families for whom it is intended within a given jurisdiction. Usually this requires that it be embedded in a public service system. Service delivery organisations can scale ‘wide’ by reaching new places, or scale ‘deep’ by reaching more people that might benefit in a given place. Replication is one approach to scaling wide.
- Service designer
Within the context of services for children and families, any individual or organisation responsible for conceiving, planning and constructing a service or programme aimed at preventing or ameliorating the difficulties or potential difficulties of children and families. Ideally service designers balance science and knowledge of ‘what works’ alongside expertise in user engagement and co-production.
- Standardised measure
A questionnaire or assessment tool that has been previously tested and found to be reliable and valid (i.e. consistently measures what it sets out to measure).
- Standards of Evidence
The Standards of Evidence are a set of criteria by which to judge how tightly defined and ready for wider replication or implementation a particular service is. They also assess the strength and quality of any experimental evidence underpinning a service. The standards form the basis of the Investing in Children ‘what works’ portal for commissioners that provides a database of proven services for commissioners of children’s services. The Standards have also underpinned numerous others, including the Project Oracle and NESTA Standards of Evidence.
- Start-up costs
The total cost of setting up a project, programme or service in a new area. Start-up costs typically include capital costs such as IT equipment, planning and training costs, consultancy, recruitment, licensing and legal costs.
- Summative evaluation
An evaluation carried out typically at the end of a delivery cycle in order to establish the outcomes of a programme against its original objectives, how effective adaptations may have been, and to inform decisions around whether a programme should continue to be delivered or whether further adaptations should be made (see also ‘formative evaluation’).
- Surface adaptations
Aspects of the service that can be adapted to fit local contexts. These are peripheral components that do not directly alter the core aspects of the service that make it work. Surface adaptations may allow providers in other areas to make the service ‘their own’ and better serve the needs of local populations.
- Tightly defined service
Successful interventions are clear about what they are, what they aim to achieve and with whom, and how they aim to do it. A tightly defined service is one which is focused, practical and logical.
- Unit costs
The cost of everything required to deliver a programme to a participant or a family. A unit cost is normally expressed as an average cost per child or family, but can also be expressed as a range (for example, unit costs ranging from “high need” to “low need” cases).
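The average form of a unit cost mentioned above is simple division of total delivery cost by the number of children or families served. A minimal sketch with invented figures:

```python
# Illustrative unit-cost calculation (all figures hypothetical).
total_delivery_cost = 90_000   # everything required to deliver the programme
families_served = 60

unit_cost = total_delivery_cost / families_served
print(f"Average unit cost per family: £{unit_cost:,.2f}")   # £1,500.00
```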
- Universal service
A service or activity that is provided to all within a given population or location. There are no inclusion or exclusion criteria.
- User engagement
A dimension of fidelity. This refers to the extent to which the children, parents or families receiving a programme are engaged by and involved in its activities and content. How consistently do participants stick with the programme? Do they attend? Do they like it? Do they get involved? Without high levels of user engagement, it is unlikely that programmes will achieve their desired impact.
- User satisfaction
Refers to whether children and families in receipt of a particular service are satisfied with the delivery and outcomes of that service. Did they feel they received enough sessions, that they established a good relationship with practitioners? Did they feel like the programme helped to deal with the difficulties they were facing, or prevented the occurrence of others? User satisfaction is typically captured upon completion of a service or programme.
- Validity
In the context of outcome measurement, the degree to which a standardised questionnaire or tool measures what it sets out to measure (i.e. it does not inadvertently measure some related but spurious construct).
- Views
Views is a project management and outcome reporting platform designed to demonstrate social impact and value in the context of revised public sector spending priorities and reforms to public sector provision. It aims to improve performance management in the delivery of public and children’s services, and was born of a desire to develop a scalable approach to process monitoring and outcome measurement so that richer forms of evaluation and impact assessment can be made available to the widest possible number of delivery organisations.