Posted on: 4 October 2023

As we move into Autumn, this edition of the quarterly CNWL Improvement Academy News brings you some thoughts on how Planned Experimentation might be a way to improve and learn as fast as possible, as well as a round-up of the latest news and training opportunities.

  • What can planned experimentation do for you?
  • IA Improvement Badges
  • Celebrating Success
    • CNWL’s Milton Keynes Community Paediatrics Service shortlisted for HSJ Patient Safety Award
    • CNWL celebrates Expert by Experience Involvement in Improvement work
    • CRHTT Milton Keynes present their Co-Production practices at an NHS England Event
  • QI in the news
    • Improving Subject Access Request (SAR) compliance at CNWL
    • Quality Improvement Collaborative 2022-23 – Improving Physical Health Monitoring in KCW
    • Level 2 QI Training on How to run a QI Project - Peter Smith tells us what is new in cohort 4 and why this training might be for you
  • Welcome to Dr Mahmudur Khan, Medical Education & QI Fellow
  • Safety Conversation
  • Expert by Experience Forum
  • Upcoming training opportunities

Planned Experimentation – what can it do for us?

The Model for Improvement will be familiar to many of us, and we often practise a sequential approach to testing the effectiveness of a set of change ideas.  We may use an effort/impact matrix to decide which of our change ideas to test first, and then use ramps of Plan Do Study Act (PDSA) cycles to test each change idea in turn.  We also often test multiple changes at the same time.

After we have tested a set of change ideas, we may begin to see the effects in our data, and it is often the case that our process measures start to improve before we see the positive effect on our outcome data.  At this point in the progress of a QI Project, we may ask ourselves ‘which change, or combination of changes, has brought about this improvement?’.  It is usually not possible to draw absolute conclusions about which change in our system has had the most effect.  This is where planned experimentation might offer us a way of understanding which of our changes have the greatest effect on our system performance data.

In essence, a planned experiment is a deliberate pattern of turning a set of factors (our change ideas) on and off in different combinations, followed by analysis of the overall effect each combination has on response variables (our outcome data).

It is useful to define what we mean by these terms:

Factors are the variables whose effect on our improvement we want to understand; they are the ‘causes’ of the outcome of our experiment.  It can be useful to set out each component as if we were drawing a Fishbone (Ishikawa) Diagram.

Each Factor can have different Levels; in its simplest form, those levels can be just two: ‘on’ and ‘off’, but of course there can be more.

Our response variable is the outcome of all the factors and background variables; it is the measure that we are trying to improve.

Continuing the thought of the system as a fishbone diagram, the background variables are all the ‘causes’ of the outcome that we know about but whose effects we do not wish to study in our planned experiment.  Background variables are those that we wish to keep constant as we vary our Factor levels.

Once we have identified the background and response variables, the factors, and how many levels are needed for each factor, we can plan what is called a design matrix, which sets out how many different tests are needed to cover all of the combinations of factors and levels.  The more factors and levels, the more complex the experiment will be.  If we have three Factors, each with two levels, we would have 2³ (2 x 2 x 2) = 8 different combinations to test, as sketched below.
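To make this concrete, here is a minimal illustrative sketch in Python; the generic factor names A, B and C are placeholders, and any three change ideas could stand in for them:

```python
from itertools import product

# Three factors, each with two levels:
# '-' means off (current practice), '+' means on (change idea in use).
factors = ["A", "B", "C"]
levels = ["-", "+"]

# The full-factorial design matrix: every combination of factor levels.
# With 3 factors at 2 levels each, that is 2 x 2 x 2 = 8 test conditions.
design_matrix = list(product(levels, repeat=len(factors)))

print("Run  " + "  ".join(factors))
for run, combo in enumerate(design_matrix, start=1):
    print(f"{run:>3}  " + "  ".join(combo))
```

Running this prints the eight runs of the design matrix, from all factors off through to all factors on.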

One of the difficulties in this kind of testing is that some change ideas cannot easily be turned on and off: for instance, if one of your factors is staff training, you cannot ‘turn off’ or ‘un-train’ staff once the training has been delivered.  You would need to order your tests so that the training is activated last.

In CNWL, we have yet to try planned experimentation on any scale, so to illustrate what we mean, let us consider a scenario where we want to improve assessment process times, using some ‘dummy data’ to see how a planned experiment might work.

An example: A planned experiment to reduce waiting times in Child and Adolescent Mental Health Services (CAMHS)

Response variable = Total time taken to complete a patient assessment

Factors (change ideas to be tested):

  1. A new pre-assessment form
  2. A standardised assessment template
  3. Digital dictation software for clinicians to write up notes

Levels: For each factor above, either use or do not use (current practice)

If we then ran enough tests to work out the average total assessment time for each combination in the design matrix, we would get a set of results like the dummy data sketched below.
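Here is one way such dummy results might be laid out in code, continuing the Python sketch above; the assessment times are invented purely for illustration (and chosen so that digital dictation has the largest effect), not real CAMHS data:

```python
# Hypothetical dummy data for the 2x2x2 CAMHS example.
# Keys: (pre-assessment form, standardised template, digital dictation),
# where 0 = not used (current practice) and 1 = used.
# Values: average total assessment time in minutes.
# These numbers are invented purely to illustrate the method.
avg_assessment_time = {
    (0, 0, 0): 120,
    (1, 0, 0): 115,
    (0, 1, 0): 117,
    (1, 1, 0): 113,
    (0, 0, 1): 90,
    (1, 0, 1): 95,
    (0, 1, 1): 96,
    (1, 1, 1): 98,
}
```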

Fortunately, software is available to analyse the outcomes and represent them graphically, helping us decide which factor, or combination of factors, is most effective.  Here are two such plots:

[Dot plot: main effect of each factor on average assessment time]

[Cube plot: average assessment time for each combination of factor levels]

We would conclude from these plots that the most effective combination for bringing down assessment times is the digital dictation alone: in the dot diagram, D for the dictation being on sits furthest to the left (the largest reduction in assessment time), and in the cube diagram, the corner with Dictation turned on and both forms turned off has the lowest average assessment time (the bottom front corner of the cube).
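For the curious, this is roughly what such software computes under the bonnet.  Continuing from the hypothetical dummy data above, here is a minimal sketch of the main-effect calculation: a factor’s main effect is the average response with it on minus the average response with it off.

```python
factors = ["Pre-assessment form", "Standardised template", "Digital dictation"]

# Main effect of a factor: mean response with the factor on,
# minus mean response with it off, averaged over the other factors.
def main_effect(data, factor_index):
    on = [t for combo, t in data.items() if combo[factor_index] == 1]
    off = [t for combo, t in data.items() if combo[factor_index] == 0]
    return sum(on) / len(on) - sum(off) / len(off)

for i, name in enumerate(factors):
    print(f"{name}: {main_effect(avg_assessment_time, i):+.1f} minutes")

# The combination with the lowest average time (the 'best corner' of the cube):
best = min(avg_assessment_time, key=avg_assessment_time.get)
print("Best combination (form, template, dictation):", best)
```

With these invented numbers, digital dictation shows by far the largest reduction, and the best corner of the cube is dictation alone, which is exactly the pattern the plots describe.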

Planned Experimentation is an emerging field and if this article has fired up your curiosity to explore using this approach, please speak to your Divisional Improvement Advisor (contact details at the end of this newsletter).

Further reading:

For an example of an academic paper using planned experimentation approaches, see this article, which discusses work to improve the management of acute kidney injury in primary care: Click here.

Reference:

Moen R, Nolan T, Provost L (2012). Quality Improvement through Planned Experimentation, 3rd edition. ISBN 978-0-07-175966-3

Improvement Academy Email Badges

Are you proud of your QI training?  At the Improvement Academy, we want everyone in CNWL to be proud of their knowledge about how to methodically improve our services, so we have developed a set of Improvement badges relevant to each level of training.

We invite everyone who is trained by the Improvement Academy to add the relevant badge to their email signature.  This is for staff, service users and carers who have completed any level of training!

Are you an Improvement Sponsor, an Improvement Coach or have you completed Level 1 Bitesize QI Training?

Here is the full range of badges:

To get your badge, please email the central QI Team at cnw-tr.improvementsupport@nhs.net letting us know which badge is the one for you and we shall send you the file to add to your email signature.

Celebrating Success

CNWL’s Milton Keynes Community Paediatrics Service shortlisted for HSJ Patient Safety Award

CNWL’s Community Paediatrics Service in Milton Keynes has been shortlisted for an HSJ Patient Safety Award in the categories of ‘Quality Improvement Initiative of the Year’ and ‘Improving Care for Children and Young People Initiative of the Year’.

Erika Lamb, Community Paediatrics Manager, said: “This project has involved a lot of time and effort for all the staff involved and has been tackled with real enthusiasm, always with the aim of improving the service provided to the young people and their families in Milton Keynes.

“To receive the wonderful news that we have been short-listed for an HSJ Patient Safety Award is so exciting and a real reward for the whole team, whether we win or not, for all the hard work involved.”

Read the full story here.

CNWL celebrates Expert by Experience Involvement in Improvement work

The inaugural Expert by Experience (EbE) celebration event, organised by the CNWL Improvement Academy and Involvement team, in collaboration with EbE colleagues, took place on Thursday, 8 June 2023 in the Bevans at Trust Headquarters. The theme of the event was 'Collaborating for Success: A Celebration of EbE Involvement in CNWL Improvement Work'. Fifty-seven individuals attended this event, including service users, carers, CNWL staff and a representative from NHS England and NHS Improvement (NHSEI).

The event commenced with the opening remarks delivered by Janet Seale and Sandra Jayacodi, setting the tone for the celebration. They highlighted that the event was aimed at celebrating the outstanding efforts of CNWL staff, service users and carers, who have worked together to improve the Trust’s services. Following the introduction, the Chief Medical Officer, Dr Con Kelly, took the stage to share insights from his own improvement journey. He is a big advocate of EbE involvement in improvement work and made sure to stress how crucial it is.

During the event, two improvement teams—the Crisis Resolution and Home Treatment Team and the Information Governance team—presented their improvement work, shining a spotlight on the invaluable contributions made by their respective EbE colleagues, Paul Jones and Sandra Jayacodi. The Q&A sessions that followed each presentation sparked interesting discussions and showed overwhelming support for EbE involvement in improvement work.

Sandra then facilitated a panel discussion featuring three panel members. Their discussion revolved around ways of ensuring meaningful engagement of EbE colleagues in improvement work. One key takeaway? EbE involvement is like the secret ingredient in a delicious meal – absolutely essential.

To wrap things up, the Trust Chair, Tom Kibasi, gave his closing remarks, reminiscing about past experiences that strengthened his resolve to be a strong advocate for the involvement of EbE colleagues in improvement work. As a token of appreciation, awards were presented to twenty-two EbE colleagues, recognising their invaluable contributions towards enhancing the Trust's services.