Building outcome measurements into crisis care – demonstrating and learning how we get individuals in crisis well.

This project has shown that a very busy and mobile team with a very high turnover can nevertheless collect ‘live’ information that helps deliver better care. We are now in the fascinating stage of trying to understand the longer-term trends in what we see, to help us think about how to do our work better and help our clients more effectively. To return to the original issue – that crisis services cannot, or have not, shown changes in clinical care – we now like to say “we can!”; we are very proud of this.


  • From start: Yes
  • During process: Yes
  • In evaluation: Yes


  • Peer: Yes
  • Academic: No
  • PP Collaborative: Yes

What We Did 

Crisis Resolution and Home Treatment Teams (CRHTTs) were mandated by the NHS Plan in 2000, and now provide coverage across all regions of the UK. However, their evidence base is poor: we carried out the first systematic review of their functions (Crisis teams: systematic review of their effectiveness in practice. Carpenter, Falkenburg, White, Tracy. The Psychiatrist, 2013; 37: 232-237), which showed that they appeared to have limited effectiveness. Detailed exploration, though, led us to realise that the scientific literature had focused almost exclusively on processes: duration and rates of admission, use of the Mental Health Act, and so forth.

These seemed unusual, and not patient-centred, so we undertook a second piece of work, the first qualitative study of a ‘typical’ CRHTT where we asked service users what they thought mattered in crisis care (Home Treatment Teams: what should they do? A qualitative assessment of patient opinions. Carpenter & Tracy, Journal of Mental Health, 2015; 24(2): 98-102). Perhaps unsurprisingly, service users wanted to know if such teams got people well, how they helped individuals’ lives in various social domains, and if they were caring.

This led to the critical realisation that, like almost every CRHTT in the country – and indeed every mental health service in the UK – we were unable to answer this most fundamental question: were we getting people well, and if so, how? As a result we designed a digital version of three outcome measurements: a clinical scale (the ‘CORE 10’) that people complete, describing their mental state or how they’re feeling; a social scale (the ‘CANSAS’) that measures factors such as employment, relationships, education, childcare, and finances; and a patient feedback form on satisfaction with care. Our clients complete these on commencing and finishing care with our team – or more often as necessary.
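For readers curious about the mechanics, the three measures can be pictured as a simple per-assessment record, captured at the start and end of an episode of care. The sketch below is purely illustrative – the `OutcomeRecord` type, field names, and example values are our assumptions, not the actual schema our Trust uses:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OutcomeRecord:
    """One assessment point for one client (all field names illustrative)."""
    client_id: str
    recorded_on: date
    core10_total: int                   # CORE 10 total, 0-40; higher = more distress
    cansas_unmet_needs: int             # CANSAS count of unmet social needs
    satisfaction: Optional[int] = None  # patient feedback rating, given at discharge

# An episode of care holds at least a start and an end record,
# with extra assessments added mid-episode as needed.
episode = [
    OutcomeRecord("client-001", date(2016, 1, 4), core10_total=27, cansas_unmet_needs=5),
    OutcomeRecord("client-001", date(2016, 2, 1), core10_total=12,
                  cansas_unmet_needs=2, satisfaction=4),
]
```

Keeping every assessment as a dated record, rather than overwriting a single score, is what makes the later trend analysis possible.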

This allows us to do several things. Firstly, data feeds from our electronic tablets (we ensured the ability to get information ‘on the go’, a critical issue for crisis care) to a database in near real-time, giving us a live snapshot of our client list in terms of how they are feeling, not just ‘numbers’ of patients. By colour-coding scores, we get a very real sense of how well or unwell people are at that time, and we do our case-load reviews based on ‘morbidity’, moving away from the old (wholly inappropriate) standard model of doing a review alphabetically! Secondly, we can ‘click’ on an individual’s name and expand all their data.
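The dashboard logic can be sketched in a few lines: map each client’s latest CORE 10 total to a traffic-light colour, then order the case-load review by morbidity rather than alphabetically. The band boundaries and client names below are illustrative assumptions, not our service’s actual thresholds:

```python
def rag_colour(core10_total):
    """Map a CORE 10 total (0-40) to a red/amber/green dashboard colour.
    Band boundaries are illustrative only."""
    if core10_total >= 25:
        return "red"        # most unwell: reviewed first
    if core10_total >= 11:  # around the commonly used CORE 10 clinical cut-off
        return "amber"
    return "green"

latest_scores = {"Adams": 8, "Khan": 31, "Novak": 14}  # invented example clients

# Case-load review order: highest score (most morbidity) first,
# replacing the old alphabetical ordering.
review_order = sorted(latest_scores, key=latest_scores.get, reverse=True)
for name in review_order:
    print(name, latest_scores[name], rag_colour(latest_scores[name]))
```

The point of the colour bands is that a glance at the screen answers “who is most unwell right now?” without anyone reading raw numbers.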

This allows us to see changes over time; in other words, is someone getting better or worse, and in what way? This helps us plan medical reviews, transfers to community services, and admission to hospital where necessary. We also like to share the information with our clients: it gives a real sense of ‘caring’ and being involved, and it can be hugely helpful to see improvement or areas of remaining concern. Finally, it offers the opportunity to look at longitudinal trends and pressures: rather than asking “how many people does a crisis service see?”, we can ask “what types of problems do people come to a crisis service with, and how are they helped?”


Wider Active Support 


We have worked with our patients, King’s College London, the commercial company ‘Inkwrx’, and our Trust Business Intelligence and Performance team. As noted above, the first issue was service user involvement, which we will detail in the next section. The team Consultant (and nominator) is also a university lecturer at the Institute of Psychiatry, Psychology, and Neuroscience, King’s College London. He brought in two MSc students to undertake research dissertations on the initial exploration of CRHTT roles and evidence, patient needs and hopes regarding care, and potential outcome measurements; these resulted in the peer-reviewed scientific publications cited in the first section.
A process involving digitisation of outcomes is complex, and we had excellent support from our Trust Business Intelligence and Performance team, particularly in the issues of safe, secure, and confidential data capture, clear and informative visual representation of our ‘dashboards’, and analysis of our data findings.

Next, we realised early on that, despite a very competent technical business and ICT department, as an NHS organisation we lacked sophistication in ‘app design’, and so recruited the external commercial partner Inkwrx, who helped with the first iteration of our tablet forms for data collection.

Finally, completing a circle, we have fed our findings – positive and negative – back to our Trust ‘Patient Experience Group’, which has service user representation, and our Trust Quality Board (attended by our Executive) to assist learning and refinement of our project.


Service user involvement was critical from the start of our project. Having completed our systematic review, it was obvious to us that the missing aspect in all the research on crisis teams was – astonishingly – patient voices on what we did, could do, and should do. This led to our extensive qualitative work with our clients, which was published last year (Home Treatment Teams: what should they do? A qualitative assessment of patient opinions. Carpenter & Tracy, Journal of Mental Health, 2015; 24(2): 98-102). It was our service users who asked us, completely reasonably, why we didn’t know if we got people well, and why we didn’t check and record the problems in their lives to try to measure change. These core demands drove our project from the beginning.

A pleasing follow-on is our experience that, in general, our service users have been very happy to complete the forms, and do not regard them as part of the ‘routine data collection’ that so often occurs in mental health care and electronic records. Of course, completing outcome measurements is not mandatory, but we find that when we tell people why we are interested in knowing more about them, they are generally more than happy to provide information; indeed, a significant number feed back that they are gratified to find we are actively interested in learning more about them.

An important factor from the beginning has been getting staff buy-in for this project. We reflected for quite some time on our qualitative study of patient opinion of the care we provided. Whilst some of the feedback was a challenge to us, it was a very positive process to think about what it really meant (and of course we also had lots of very good feedback), and how we could learn and grow. As team consultant I was especially proud to see a team open to new ideas and reflecting on feedback; it is a sign of a service ready for growth and change. Staff were, and remain, upbeat, saying “let’s get better at what we do”; they are always the first to show visitors to our team how we use our outcome measurements!

Looking Back/Challenges Faced

Looking back, there is no doubt we underestimated the technological challenges in what naively seemed to us to be the ‘simple’ process of making an app! It was a large and complex process, as was designing the data collection and ‘dashboard screen’. With hindsight, we should have allowed a more realistic timeframe; it proved frustrating for us as a team to think we were ‘almost there’ so many times. We learned as a service that implementing such projects takes considerable patience as well as enthusiasm.

We have faced numerous challenges in our project, some of which will be fruitful learning points for the expansion of our pilot, which we note below. Firstly, we faced conceptual challenges: which ‘outcomes’ should one measure, and why? We did not want to get bogged down with dozens of scales, one for each illness or problem, and we did not want to focus just on ‘symptoms’ but wanted to see people in their broader environment and life – something that can get lost in a crisis service. This led us to choose a ‘pan-diagnostic’ scale – the CORE 10 – which we can administer to people with any mental health problem, or none at all, just distress. Secondly, we found the CANSAS to be a great scale for capturing the social aspect, such as relationships, education, and employment. Quite a bit of thought was required for this seemingly simple evaluation: an MSc student helped over an entire summer project, determining the validity, reliability, and so forth of the various potential measures (and whether they would cost us money to use).

As described earlier, we faced technological challenges: choosing scales was only the beginning! We had to digitise them into tablet ‘apps’, sort out safe and confidential data storage, and a suitable dashboard. As well as having a successful partnership with a commercial company, we have been very fortunate to have a forward-thinking and patient-centred business and ICT department, who saw the clinical value from the beginning and worked incredibly hard with us.
A final challenge to date has been reminding ourselves as clinicians, and our service users, why we collect this information, and being thoughtful in how we use it.

In the future, we see our biggest issue being learning to become smart at evaluating what will be a big data set, to think creatively about what it is telling us about the care we provide, and how we can do things better.


Our pilot has been, we believe, a big success, and we are keen not just to continue it within our own team, but to broaden it across our directorate. We have involved our Trust Board and our Directorate leads from the beginning, as well as our business and ICT department. We have been given good support and seed funding for this. We know that for this to continue and grow, we will need more support, and that the way to obtain this will be to demonstrate the utility of our outcomes, and what we learn as a service. We are confident that we can do this.

There is a move in mental health commissioning away from ‘block contracts’ towards payment by results. With our Chief Executive we presented our pilot findings to our three London CCGs (Bromley, Bexley, and Greenwich) at a commissioning meeting chaired by NHS London. The panel were incredibly impressed with our model, and NHS London have encouraged the CCGs to support our work to continue and grow. As a follow-up to this meeting, the Regional Director for Operations and Delivery in London, Simon Weldon, has requested to visit our service on June 17th of this year, to see the system in operation and to determine whether other Trusts can learn from our experiences. This support will be critical in sustaining our project.


Our service has been evaluated in numerous ways. We have presented it to our Directorate’s Clinical Effectiveness Group, Patient Experience Group, and Quality Board, as well as to the Trust Board meeting, to ensure it has had high visibility and review. Some of these groups have patient and carer input, which has been helpful in understanding crisis outcome measurements from a user perspective. Overall, we have received very helpful comments from these groups, which have refined our work throughout.

Against the background of our crisis outcomes work, we were visited by NHS England at the launch of their Mental Health Taskforce in 2016, and the work described in this application was highlighted as an example of excellent care in patient experience and clinical effectiveness.

We have also been visited by NHS London, and as noted earlier, the Regional Director for Operations and Delivery in NHS London, Simon Weldon, will be coming to review and evaluate the system in the next month or so.


We are currently expanding our pilot across our Directorate; with the Board and Executive’s approval, we expect full coverage of our programme across adult mental health in Oxleas by 2017.

We are also keen that others should learn from our work: as noted, both NHS London and NHS England have been to see it.

Finally, we have submitted a ‘methods’ paper, describing the technical aspects of what we did and how we accomplished it, to the peer-reviewed scientific journal BMC Medical Informatics and Decision Making: it is currently under review with them.


