The Impact of Remote Home Monitoring of People with COVID-19 Using Pulse Oximetry: A National Population and Observational Study

Abstract

Background

Remote home monitoring of people testing positive for COVID-19 using pulse oximetry was implemented across England during the Winter of 2020/21 to identify falling blood oxygen saturation levels at an early stage. This was hypothesised to enable earlier hospital admission, reduce the need for intensive care and improve survival. This study is an evaluation of the clinical effectiveness of the pre-hospital monitoring programme, COVID Oximetry @home (CO@h).

Methods

The setting was all Clinical Commissioning Group (CCG) areas in England where there were complete data on the number of people enrolled onto the programme between 2nd November 2020 and 21st February 2021. We analysed relationships at a geographical area level between the extent to which people aged 65 or over were enrolled onto the programme and outcomes over the period between November 2020 and February 2021.

Findings

For every 10% increase in coverage of the programme, mortality was reduced by 2% (95% confidence interval: 4% reduction to 1% increase), admissions increased by 3% (-1% to 7%), in-hospital mortality fell by 3% (-8% to 3%) and lengths of stay increased by 1·8% (-1·2% to 4·9%). None of these results was statistically significant, although the confidence intervals indicate that any adverse effect on mortality would have been small, while a mortality reduction of up to 4% may have resulted from the programme.

Interpretation

There are several possible explanations for our findings. One is that CO@h did not have the hypothesised impact. Another is that the low rates of enrolment and incomplete data in many areas reduced the chances of detecting any impact that may have existed. Also, CO@h has been implemented in many different ways across the country and these may have had varying levels of effect.

Funding

This is independent research funded by the National Institute for Health Research, Health Services & Delivery Research programme (RSET Project no. 16/138/17; BRACE Project no. 16/138/31) and NHSEI. NJF is an NIHR Senior Investigator.

Research in Context

Evidence before this study

Existing evidence before this study and the search strategy used to obtain this evidence has been published previously by the authors in a systematic review. We searched MEDLINE, CINAHL PLUS, EMBASE, TRIP, medRxiv and Web of Science for articles and preprints from January 2020 to February 2021. Papers were selected if they focussed on the monitoring of confirmed or suspected patients with COVID-19. The search algorithm used combinations of the following terms: "COVID-19", "severe acute respiratory syndrome coronavirus", "2019-nCoV", "SARS-CoV-2", "Wuhan", "coronavirus", "virtual ward", "remote monitoring", "virtual monitoring", "home monitoring", "community monitoring", "early monitoring", "remote patient monitoring", "pre-hospital monitoring", "Covidom", "my mhealth", "GetWell Loop", "silent hypoxaemia", "pulse oximetry". Previous quantitative studies have assessed remote oximetry monitoring services for COVID-19 patients mostly at individual sites and focussed on their safety. However, their effectiveness has been little studied. This may reflect the challenges of identifying reliable counterfactuals during a rapidly evolving pandemic.

Added value of this study

This study is part of a wider mixed methods evaluation that followed the rapid implementation of remote monitoring across the English NHS during the Winter of 2020/21. Previous studies have evaluated remote monitoring of COVID-19 patients using oximetry at a local level, some targeted at people at particularly high risk. This study adds evidence on the performance of such programmes at a national level.

Implications of all the available evidence

There is some existing evidence that remote monitoring of COVID-19 patients can be locally effective although we have not been able to replicate such findings at a wider level. Missing data and lower coverage of the service than expected may have influenced our results, and the effectiveness of some local programmes could have been lost amongst the analysis of national data. Future implementation requires better data collection strategies which could be focussed within fewer local areas, and effective learning from areas that have achieved better population coverage.

Introduction

During the early months of the COVID-19 pandemic, many patients with COVID-19 were admitted to hospital having deteriorated several days after they were first diagnosed. Many of these patients had "silent hypoxia" (low blood oxygen saturation levels without typical symptoms or awareness) and, once at hospital, often required intensive treatment with a high risk of mortality. This motivated health services to try to detect such cases at an earlier stage by monitoring blood oxygen levels in people diagnosed with COVID-19 at home using pulse oximetry. This could reassure people who did not need to go to hospital, whilst more quickly identifying individuals with dangerously low blood oxygen saturations (<92%).

In the English National Health Service (NHS), remote home monitoring using pulse oximetry started to be implemented within some areas during the first wave of the pandemic in the UK. This was followed by a national implementation during the Winter of 2020/21. The service was known as COVID Oximetry @home (CO@h) and by the end of January 2021 it was operating in all clinical commissioning areas of England.

The way different areas organised and operated the service varied. People testing positive for COVID-19 would be sent a pulse oximeter for use at home, and readings would be sent to local healthcare staff, either through smartphone technology or by telephone, depending on the location and the preferences of the patient. Some sites started by only enrolling individuals aged 65 or over, or who were deemed clinically extremely vulnerable. Others extended enrolment to a wider age group, and these criteria often changed over time.

One aim of CO@h was to reduce mortality through earlier identification of deterioration. Furthermore, it was hypothesised that if fewer COVID-19 patients were admitted to hospital with advanced disease, and critically low oxygen levels, there may be a reduction in the use of critical care facilities, fewer deaths within hospital and shorter lengths of stay. The anticipated impact on numbers of hospital admissions was less certain since the aim of the programme was not to reduce admissions, but to make sure people who needed to be in hospital were admitted sooner. However, any consequence on the number and mix of patients admitted for COVID-19 would be useful to understand as remote monitoring may have different impacts on different types of individual.

Earlier studies of the use of oximetry for remote monitoring within England during the country’s first wave focussed on aspects of safety and implementation, but were unable to establish reliable comparators for measuring impact.

Faced with this lack of evidence as to the likely effectiveness of CO@h, the two rapid evaluation teams commissioned by the National Institute for Health Research (NIHR) were requested by NHS England and NHS Improvement to undertake a mixed methods study of the service. This study included evaluations of clinical effectiveness, costs, the processes of implementation and patient and staff experiences, and was one of three evaluations simultaneously requested by NHS England/Improvement.

This paper presents findings from the clinical effectiveness workstream of the study addressing the specific research questions:

1. What is the impact of CO@h on mortality?
2. What is the impact of CO@h on the incidence of hospital admission for COVID-19 or suspected COVID-19 and on the characteristics of those admitted?
3. For these admissions, what is the impact on in-hospital mortality and length of stay?

Our quantitative approach used combinations of unlinked, aggregated population-level data and hospital administrative data. In doing so we were able to undertake a rapid analysis that not only complemented the other evaluations but also provided valuable insight into the future evaluation of similar programmes implemented at scale.


Methods

Study design

The study of overall mortality and admissions was designed as an area-level analysis combining aggregated data from different sources. Considering these data as time series, we investigated “dose-response” relationships between the evolving coverage of the programme within each area and outcome. We analysed four outcomes: mortality from COVID-19, hospital admissions for people with confirmed or suspected COVID-19, in-hospital mortality for these admissions and their lengths of stay. For the in-hospital outcomes, we used an observational design relating in-hospital mortality and lengths of stay at an individual patient level to the degree of coverage of the CO@h programme within the area at the time of admission.

Setting and participants

The setting was all Clinical Commissioning Group (CCG) areas in England where there were complete data on the number of people enrolled onto the programme (onboarded) between 2nd November 2020 and 21st February 2021. (CCGs are NHS organisations that organise the delivery of primary care services within a specific geographic area. At the time of the study there were 135 in England.) The study populations included anyone with a laboratory-confirmed positive test for COVID-19 and any hospital admission for COVID-19 or suspected COVID-19. We also limited the analysis to people aged 65 or over, as this population was eligible for CO@h across all CCGs and both coverage and frequency of outcomes within this group were higher. Implementation amongst younger age groups across the country was much more variable.

Data and variables

For our analysis we used data from several sources (see supplementary material, Table S1). Data on numbers of new cases of COVID-19 and deaths were acquired from Public Health England (now the UK Health Security Agency). New cases were laboratory-confirmed and deaths were those either within 60 days of the first positive test or where COVID-19 was mentioned on the death certificate. If someone had more than one positive test within the previous seven days, then only one was counted. These data were aggregated by week, age band and CCG. The selected age bands were 65 to 79 and 80 plus. Numbers of people onboarded to CO@h were sourced from a bespoke national data collection for the programme and aggregated by the team at Imperial College London undertaking one of the other two simultaneous evaluations. Due to small numbers, aggregation was performed by fortnight, rather than week, and by the same age bands and by CCG. To comply with data protection rules, these data were also rounded to the nearest five individuals, or, for smaller values, labelled as between one and seven.
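The rounding rules above constrain the true onboarded counts to a small range around each reported value. A minimal Python sketch (the study used SAS; function names are ours) of the feasible ranges, and of drawing from them uniformly as in the simulation described later under "Using rounded data":

```python
import random


def feasible_values(reported):
    """True values consistent with a reported onboarded count.

    Counts were rounded to the nearest five, so a report of 25 could be
    anywhere from 23 to 27; counts of one to seven were reported only
    as the range "1-7".
    """
    if reported == "1-7":
        return list(range(1, 8))
    r = int(reported)
    return list(range(max(r - 2, 1), r + 3))


def resample(reported_counts, rng):
    """Draw one feasible value per reported count, uniformly."""
    return [rng.choice(feasible_values(c)) for c in reported_counts]


rng = random.Random(0)
draw = resample(["1-7", "25", "40"], rng)
print(all(lo <= v <= hi for v, (lo, hi)
          in zip(draw, [(1, 7), (23, 27), (38, 42)])))  # True
```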

Data on hospital admissions and outcomes were obtained from Hospital Episode Statistics (HES). Although most of the non-hospital data were available weekly, we aggregated to fortnightly data in order to match the aggregation of the onboarding data. We restricted our statistical analysis to the period between 2 November 2020 and 21 February 2021, when numbers of cases and outcomes were at their peak. Also, outside that period there were too many low numbers at our chosen level of granularity.

Coverage of CO@h was measured as numbers enrolled onto the programme within each CCG every fortnight divided by the number of new cases detected in that fortnight. To be able to calculate this by CCG, we required the onboarding data within a CCG to be complete. CCGs providing complete onboarding data were identified by NHS Digital. As part of the wider mixed methods study, the team selected 28 study sites for surveys, interviews and to obtain data on costs, most of which were CCGs that provided complete data. For the costing part, sites were independently asked how many people they had onboarded, and we used this information to validate the reports of completeness from the national programme and to include additional CCGs where the numbers onboarded were broadly similar or greater. Further information about this process is included in Section 2 of the supplementary material. Where numbers onboarded were between one and seven, we assigned a value of four, being the mid-point within the range.
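The coverage calculation can be sketched as follows (in Python rather than the SAS used for the study; function names and the example figures are ours):

```python
def parse_onboarded(reported):
    """Convert a reported onboarded count to a usable number.

    Counts were rounded to the nearest five; values between one and
    seven were suppressed, and we assign the mid-point of four.
    """
    if reported == "1-7":
        return 4
    return int(reported)


def coverage(onboarded, new_cases):
    """Fortnightly coverage for a CCG: people enrolled onto CO@h
    divided by new COVID-19 cases detected in the same fortnight."""
    return parse_onboarded(onboarded) / new_cases


# e.g. a suppressed count with 80 new cases in the fortnight
print(coverage("1-7", 80))   # 0.05
print(coverage("25", 100))   # 0.25
```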

We estimated coverage in two ways. One was to calculate it for each CCG regardless of whether a service was operating at the time, and this was used in our analysis. However, to understand what coverage was achievable once a service was implemented, we also estimated coverage within individual CCGs over periods when we knew a service was operating there. For this we only included fortnights over which a service was operating within the CCG for the entirety.

The proportion of hospital beds occupied by COVID-19 patients was used as a measure of local system pressures and sourced from publicly available routine data. By the end of February 2021, most hospital trusts were operating step-down virtual wards whereby COVID-19 patients could be discharged early with a pulse oximeter and monitored at home in a similar way to the CO@h service. Due to the potential influence of these virtual wards on hospital outcomes, their existence was incorporated as a confounding variable in our analyses of length of stay and in-hospital mortality.

Comparisons between included and excluded CCGs

We compared population characteristics and COVID-19 incidence rates between the CCGs we included (because their data were believed to be complete) and the remaining CCGs, to test how representative the included CCGs were. The mean values and proportions associated with each CCG were treated as separate observations. Normality was assessed by viewing Q-Q plots of the variables, and comparisons were carried out using Student's t-tests, or Mann-Whitney U-tests where data were skewed. We also investigated their geographical spread.

Analysis of mortality

Because we only had aggregate data for deaths, new COVID-19 cases and people onboarded to CO@h, our approach was to calculate coverage rates for CO@h over time and then investigate relationships between levels of coverage and mortality by age band within each CCG. To do this we adopted a two-stage approach: the first stage was to estimate denominators representing exposure; the second was to use these as offset variables in negative binomial regression models relating mortality to coverage of the CO@h programme by age group. We included a further variable for the month to allow for changes in relationships as the second wave progressed. To account for CCG-level effects we used generalised estimating equation (GEE) approaches with an exchangeable correlation structure. This approach accommodates the fact that mortality observations within a single CCG are likely to be correlated; GEEs account for that correlation by adjusting parameter estimates and standard errors.
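Because these models use a log link, a fitted coefficient translates directly into the percentage effects quoted in the Findings. A minimal sketch (the coefficient value below is illustrative, not an estimate from the study):

```python
import math


def effect_per_10_points(beta):
    """Percentage change in the outcome rate for each 10-percentage-point
    increase in CO@h coverage, given a log-link regression coefficient
    (coverage measured on a 0-1 scale)."""
    return (math.exp(beta * 0.1) - 1) * 100


# An illustrative coefficient of -0.202 corresponds to roughly the
# 2% mortality reduction per 10% coverage quoted in the Findings
print(round(effect_per_10_points(-0.202), 1))  # -2.0
```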

The need to estimate denominators arose because we were not able to directly link the new cases and mortality data. When a death occurs, the median time between a new case arising and death is about two weeks, although some may have been diagnosed only in the previous week, and some three weeks or more before. We therefore developed a preliminary set of regression models relating mortality to new cases, with new cases lagged at different times, in order to establish the contributions of the lagged variables. These then determined weights which we used to aggregate new cases into a denominator. Assuming that there was no lag between diagnosis and exposure to the programme, we applied the same weights to the onboarding data to establish a weighted coverage variable appropriate to the mortality observed at each time. A more detailed description of this approach is provided in Section 3 of the supplementary material.
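The weighting step can be sketched as below (Python rather than SAS; the lag weights here are purely illustrative, whereas the study estimated them from the preliminary regressions on lagged new cases):

```python
# Illustrative lag weights only: lag in fortnights -> weight
LAG_WEIGHTS = {0: 0.2, 1: 0.5, 2: 0.3}


def weighted_sum(series, t):
    """Weighted sum of lagged values of a fortnightly series at time t."""
    return sum(w * series[t - lag]
               for lag, w in LAG_WEIGHTS.items() if t - lag >= 0)


def weighted_coverage(onboarded, new_cases, t):
    """Apply the same weights to onboarding numbers and new cases,
    assuming no lag between diagnosis and exposure to the programme."""
    return weighted_sum(onboarded, t) / weighted_sum(new_cases, t)


new_cases = [100, 200, 300]   # fortnightly new cases in one CCG
onboarded = [10, 30, 60]      # fortnightly onboarding in the same CCG
print(round(weighted_sum(new_cases, 2), 6))         # 190.0
print(round(weighted_coverage(onboarded, new_cases, 2), 3))  # 0.158
```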

Other options for lagging the time between diagnosis, onboarding and mortality were tested in sensitivity analysis and reported in the supplementary material.

Analysis of hospital admissions

Hospital admissions over the study period were extracted from Hospital Episode Statistics (HES). We considered any admission where COVID-19 or suspected COVID-19 appeared as a diagnosis in the first episode of care, whether as a primary or secondary diagnosis (ICD-10 codes U07.1 and U07.2). If a patient was readmitted with one of these diagnoses within a 28-day period, we only considered the first admission. To match the onboarding data, numbers were aggregated by age band and fortnight.
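The 28-day readmission rule can be sketched as a rolling de-duplication over one patient's admission dates (a simplified Python illustration; function names and dates are ours):

```python
from datetime import date


def first_admissions(admission_dates, window_days=28):
    """Keep an admission only if no counted admission for the same
    patient occurred within the previous 28 days."""
    kept = []
    for d in sorted(admission_dates):
        if not kept or (d - kept[-1]).days > window_days:
            kept.append(d)
    return kept


dates = [date(2020, 11, 2), date(2020, 11, 20), date(2021, 1, 5)]
# The 20 November readmission falls within 28 days of 2 November,
# so only two admissions are counted
print(len(first_admissions(dates)))  # 2
```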

We undertook a similar procedure for hospital admissions as for mortality, although with different weights, since the time between diagnosis and admission tended to be shorter.

Again, for our sensitivity analysis, we tested different options for lagging the time between diagnosis, onboarding and outcomes. We also tested the option of only including admissions where COVID-19 or suspected COVID-19 was the primary diagnosis.

Separate models were developed to evaluate any impact of CO@h on the characteristics of patients admitted in terms of age, sex, deprivation, Charlson score (a measure of the severity of co-morbidities) and ethnicity. Our dependent variables for these characteristics were the mean age of admissions by CCG, numbers of female admissions, numbers living in the most deprived quintile defined by the Index of Multiple Deprivation (IMD), numbers with Charlson scores greater than five and numbers reported with non-white ethnicity. For age, we performed ordinary linear regression relating the mean age to coverage and month, accounting for CCG-level effects using GEE approaches as before. For the other characteristics we used Poisson regression to relate each dependent variable to coverage, age band and month, accounting for CCG-level effects in a similar way. For the Poisson regression models, the natural logarithm of the number of admissions was used as an offset variable.
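Deriving these dependent variables from admission records can be sketched as follows (hypothetical records; the study computed the equivalents from HES fields by CCG and fortnight):

```python
# Hypothetical admission records standing in for HES extracts
admissions = [
    {"age": 71, "sex": "F", "imd_quintile": 1, "charlson": 6, "ethnicity": "white"},
    {"age": 84, "sex": "M", "imd_quintile": 3, "charlson": 2, "ethnicity": "non-white"},
    {"age": 69, "sex": "F", "imd_quintile": 1, "charlson": 7, "ethnicity": "white"},
]

summary = {
    # Dependent variable for the linear regression on age
    "mean_age": sum(a["age"] for a in admissions) / len(admissions),
    # Counts used as dependent variables in the Poisson models
    "n_female": sum(a["sex"] == "F" for a in admissions),
    "n_most_deprived": sum(a["imd_quintile"] == 1 for a in admissions),
    "n_charlson_over_5": sum(a["charlson"] > 5 for a in admissions),
    "n_non_white": sum(a["ethnicity"] == "non-white" for a in admissions),
}
print(summary["n_female"], summary["n_charlson_over_5"])  # 2 2
```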

Analysis of in-hospital outcomes

To analyse outcomes for COVID-19 patients admitted to hospital, we used individual-level Hospital Episode Statistics (HES). To measure in-hospital mortality we included any death that was reported within the same hospital spell. To investigate the impact on in-hospital mortality, we created logistic regression models relating mortality to the weighted coverage for the relevant CCG with individual patient characteristics as confounders. Values for the weighted coverage corresponded to those calculated for hospital admissions. Again, we used generalised estimating equation (GEE) approaches to account for CCG-level effects. Length of stay was defined as the number of days between admission and discharge from the same hospital or death within that hospital. We used negative binomial regression models to analyse the impact on lengths of stay of the weighted coverage for the relevant CCG, again with individual patient characteristics as confounders. Stays longer than 60 days were trimmed to 60 days to mitigate the influence of very long stays. Because negative binomial models estimate ratios, we derived the impact on length of stay as a percentage change rather than a number of days.
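The trimming rule and the ratio-to-percentage conversion can be sketched as (illustrative Python; the rate ratio below is an example value, not a study estimate):

```python
def trimmed_length_of_stay(days, cap=60):
    """Days between admission and discharge or in-hospital death,
    trimmed to 60 days to limit the influence of very long stays."""
    return min(days, cap)


def percentage_change(rate_ratio):
    """A negative binomial model with a log link estimates rate ratios,
    so the effect on length of stay is reported as a percentage change."""
    return (rate_ratio - 1) * 100


print(trimmed_length_of_stay(75))          # 60
print(round(percentage_change(1.018), 1))  # 1.8
```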

Using rounded data

To accommodate the uncertainty caused by the rounding of the onboarding data, we ran all our statistical models multiple times, each time randomly sampling onboarded numbers from the range of feasible values (treating the distributions as uniform). Based on the similarity of results between simulations, we deemed 1000 runs per model sufficient. The simulation results were then pooled to obtain overall effect sizes. All statistical analyses were performed using SAS version 9.4.

Patient and public involvement

Members of the study team met to discuss the study with service users and public members of the NIHR BRACE Health and Care Panel and patient representatives from NIHR RSET. Although these discussions mostly informed the qualitative evaluations in the wider study, meetings were also held during the data analysis phase to share learning and cross-check our interpretations of findings.

Data governance and ethics

The receipt of aggregated data from Public Health England was governed by a data sharing agreement. Receipt of aggregated onboarding data from Imperial College was governed by their separate data sharing agreement with NHS Digital. The access and use of HES was governed by an existing data sharing agreement with NHS Digital covering NIHR RSET analysis (DARS-NIC-194629-S4F9X). Since we were using combinations of aggregated data and datasets for which we already had approval to use, no ethics committee approval was needed for this analysis. No patient consent was required for this study.



2022 Mar 1: 101318. doi: 10.1016/j.eclinm.2022.101318
