What if there were a way for research to answer the questions relevant to patients and clinicians — and save billions of dollars in the process? Investment in clinical trials is critical for human health, but correctable problems in how trials are designed and conducted could be preventing these vital research projects from impacting the most important people in the equation: the patients.
These correctable problems run the gamut from how trials are designed to how they are reported. But according to decision scientists like CHÉOS’ Drs. Nick Bansback, Mark Harrison, and Rebecca Metcalfe, the root cause of many of these problems is that clinical trials often don’t adequately address what matters to patients.
Dr. Bansback, Program Head of Decision Sciences at CHÉOS, stresses that this does not mean that clinical trials are not valuable — they provide vital information about treatments and procedures, and are the gold standard for informing clinical practice. But, he said, the issue is that many clinical trials are not patient-oriented or designed to have optimal impact.
“Clinical trials are designed to investigate a treatment for patients, but those patients are not really engaged in the design of the trial,” he said. “This means that trials often aren’t designed to meet the needs of the very people we are trying to help.”
“When we talk about results not being applicable to the problem, we aren’t talking about null results; they are as important as positive results,” he added. “We are talking about results that couldn’t be useful even if the primary outcome is met.”
Although many clinical trials do engage with patients during the planning phase, problems arise when researchers don’t ask the right questions, don’t ask them in a way that yields useful answers, or don’t properly apply the information gathered to the design and conduct of the trial. Problems can arise before a study is even commissioned.
The consequence is a trial that has to stop early because it fails to enroll enough participants, or results that don’t provide the information needed to answer the questions that matter to patients and care providers.
In a blog post for the BC SUPPORT Unit, Drs. Bansback and Harrison likened it to a company doing market research. “When first developing the iPhone, [Apple] needed to know what people wanted (e.g., battery life, screen size, cost) before investing huge amounts of money in development, mass production, advertising, and sales,” they wrote.
By applying decision science methods to make clinical trials more relevant to patients, the group aims to optimize them, greatly increasing their efficiency and the likelihood that they will have a demonstrable impact in the real world.
Nothing without participants
The Scleroderma: Cyclophosphamide or Transplantation (SCOT) trial is an example of a trial that struggled to enroll enough participants. The trial was designed to test a type of bone marrow transplantation for the treatment of scleroderma, a rare autoimmune disease that can affect the skin and other organs. The trial initially planned to recruit 226 participants. However, due to low enrollment, the research team had to revise this target to 114 participants, broaden the inclusion criteria, and change the primary outcome of interest. Despite these changes, the trial recruited only 75 patients, greatly reducing the strength and usefulness of its conclusions.
After the results were published, Dr. Harrison and his team went back through the study’s publications to understand how the enrollment problems could have been avoided. The team talked to people with scleroderma to determine what was important to them when deciding whether to participate in a trial like SCOT.
“We then surveyed a broadly representative group of patients to ask ‘Now that we know all of this information that we think is important, what is the relative importance to you? Which are the most important things that would impact your decision-making about participation?’,” he explained.
They found that, although risks and benefits of the treatment were important, there were also modifiable factors that were important to people, like the distance patients needed to travel to participate, the extent to which multidisciplinary care would be available, and the experience level of physicians in the trial.
“We could have predicted the low enrollment before the trial had even begun — just by talking with the patients who were potential candidates for the study,” said Dr. Harrison.
“By changing some modifiable factors, we found that enrollment could likely be increased from 33 per cent to north of 50 per cent participation for this trial,” he added. “So that’s an example of the kind of boost you could get by taking factors into account that are important to people at the design phase, and that’s without doing anything to the study endpoints.”
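To make the arithmetic behind that kind of estimate concrete, here is a minimal sketch of how preference-survey results could feed a simple binary logit choice model to predict participation. The attribute names and utility values below are hypothetical, chosen only so the output roughly matches the figures Dr. Harrison quotes; they are not estimates from the actual SCOT preference study.

```python
import math

# A binary logit choice model for trial participation. All utility values
# are hypothetical, picked so the arithmetic mirrors the 33 per cent and
# 50-plus per cent figures quoted above.
BASELINE_UTILITY = -0.7           # inclination to enroll under the original design
IMPROVEMENTS = {
    "shorter_travel_distance": 0.5,
    "multidisciplinary_care": 0.3,
    "experienced_physicians": 0.2,
}

def enrollment_rate(active_improvements):
    """Predicted probability of enrolling, given which modifiable
    design features are in place."""
    utility = BASELINE_UTILITY + sum(IMPROVEMENTS[f] for f in active_improvements)
    return 1 / (1 + math.exp(-utility))

print(f"original design: {enrollment_rate([]):.0%}")            # ~33%
print(f"modified design: {enrollment_rate(IMPROVEMENTS):.0%}")  # ~57%
```

In a real application, the utilities would come from fitting a choice model to the survey responses; the value of the exercise is that the modifiable design features become levers whose effect on enrollment can be estimated before the trial begins.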
Study endpoints: Making results matter
Another challenge for the SCOT trial was that its primary outcome was revised to a ‘composite endpoint,’ a change that affects the interpretability of the results.
Study endpoints are the events or outcomes that are measured to determine whether an intervention has an effect. Researchers occasionally have to change these endpoints during a clinical trial in order to be able to report results, either because they weren’t able to recruit enough people or because the events being measured aren’t happening as often as expected.
Often, changing the endpoints means that different outcomes are collapsed into larger categories or ‘composite endpoints’, explained Dr. Metcalfe, a research methodologist at CHÉOS and a postdoctoral fellow at the UBC Faculty of Pharmaceutical Sciences.
“These composite endpoints are frequently used in clinical trials, but are hard to interpret because they treat very different outcomes equally,” she said. “So, in a composite endpoint, you could have something like death, heart attack, and high blood pressure. Those are all very different, but using composite endpoint approaches, if a patient dies, it’s treated statistically the same as if a patient has high blood pressure.”
These broad categories can make it very difficult for people to understand what trial results mean in clinical application. How do you make treatment decisions when you only have information about a drug’s combined risk of death, heart attack, and high blood pressure?
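As a toy illustration of that problem, the sketch below (with invented patient records) shows how a composite endpoint counts any component event the same, whether it is fatal or relatively minor.

```python
# Invented patient records; any component event, mild or fatal, counts
# the same toward the composite.
COMPOSITE = {"death", "heart_attack", "high_blood_pressure"}

patients = [
    {"id": 1, "outcomes": {"death"}},
    {"id": 2, "outcomes": {"high_blood_pressure"}},
    {"id": 3, "outcomes": set()},  # no event
    {"id": 4, "outcomes": {"heart_attack", "high_blood_pressure"}},
]

# Patient 1 (died) and patient 2 (high blood pressure) contribute equally,
# and patient 4's two events still count only once.
events = sum(1 for p in patients if p["outcomes"] & COMPOSITE)
print(f"composite event rate: {events}/{len(patients)} = {events / len(patients):.0%}")
```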
In an upcoming publication, Dr. Metcalfe shows how patient preferences can help us better understand clinical trial results using composite endpoints. Similar to the scleroderma project, Dr. Metcalfe looked back at a previous trial and interviewed patients about the different aspects of a treatment to understand what mattered most to them. Then, the team surveyed a broader group of patients to understand the relative importance of these items.
Dr. Metcalfe found that there were three distinct patient groups that valued different aspects of a potential treatment. If the composite endpoint for the trial was changed to reflect those preferences, the team found that the results would have been completely different for the three groups of patients.
“For one group, there was no difference between the treatments. For another group, the first treatment was better. And for the third group, the second treatment was better,” she explained.
This means that the clinical interpretation of these results depends on what people want in a treatment. “The idea here is to help people understand how their values might interface with actual treatment recommendations,” said Dr. Metcalfe.
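A preference-weighted reanalysis along those lines might look like the following minimal sketch. The outcome risks and group weights are invented for illustration; the point is only that different weightings can flip which treatment comes out ahead, just as the three groups in the study arrived at three different answers.

```python
import math

# Hypothetical per-patient risks of each outcome under two treatments.
risk = {
    "A": {"death": 0.03, "heart_attack": 0.04, "high_blood_pressure": 0.10},
    "B": {"death": 0.02, "heart_attack": 0.03, "high_blood_pressure": 0.20},
}

# Hypothetical preference weights (how bad each outcome is, 0 to 1) for
# three patient groups like those the survey identified.
group_weights = {
    "group 1": {"death": 1.0, "heart_attack": 0.8, "high_blood_pressure": 0.6},
    "group 2": {"death": 1.0, "heart_attack": 0.2, "high_blood_pressure": 0.0},
    "group 3": {"death": 1.0, "heart_attack": 0.5, "high_blood_pressure": 0.15},
}

def weighted_burden(treatment, weights):
    """Expected preference-weighted harm under one treatment."""
    return sum(risk[treatment][o] * w for o, w in weights.items())

for group, weights in group_weights.items():
    a, b = weighted_burden("A", weights), weighted_burden("B", weights)
    verdict = ("no difference" if math.isclose(a, b)
               else "A better" if a < b else "B better")
    print(f"{group}: A={a:.3f}, B={b:.3f} -> {verdict}")
    # group 1 -> A better, group 2 -> B better, group 3 -> no difference
```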
Dr. Metcalfe noted that revisiting clinical trial results with this type of approach can be a powerful way to make them more interpretable. “I think this is a useful sensitivity analysis to see how trials could lead to different treatment recommendations for different people,” she said.
Decisions, decisions: How to use this in the real world?
Although looking back at a trial retrospectively can be useful, the best way to apply this approach is at the planning stages of a study.
“I’ve always thought that this could be done similarly to a sample size calculation. We’d never do a trial without plugging in these numbers to know how many people we need to answer the primary question,” explained Dr. Harrison. “This is similar in a way — how feasible is it, given people’s preferences, to hit that sample size? If it’s not feasible, do we have to change something?”
“Of course, this isn’t quite as easy as a conventional sample size calculation; our approaches take three to six months,” added Dr. Bansback. “But committing three months in order to run a successful trial that costs hundreds of thousands of dollars and lasts for four years is a good use of time and resources. One way of doing this is to do it alongside a pilot or feasibility study, which a lot of these big trials require.”
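As a rough illustration of this analogy, the sketch below pairs a conventional two-proportion sample size calculation (two-sided alpha of 0.05, 80 per cent power) with a preference-based feasibility check. The effect sizes, eligible patient pool, and participation rates are all hypothetical.

```python
import math

def per_arm_sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Per-arm sample size for comparing two proportions
    (normal approximation, two-sided alpha = 0.05, power = 0.80)."""
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

needed = 2 * per_arm_sample_size(p1=0.40, p2=0.25)  # total across both arms

# Feasibility check: eligible patients reachable during accrual, times the
# participation rate predicted from the preference survey.
ELIGIBLE_POOL = 600
for label, uptake in [("original design", 0.33), ("modified design", 0.55)]:
    expected = ELIGIBLE_POOL * uptake
    status = "feasible" if expected >= needed else "not feasible"
    print(f"{label}: need {needed}, expect ~{expected:.0f} -> {status}")
```

With these invented numbers, the trial needs 304 participants; the original design would be expected to yield about 198 and the modified design about 330, so only the modified design clears the bar.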
Returning to the market research analogy, Dr. Bansback notes that almost every field outside of health research is willing to invest time up front to develop something that will succeed.
“Once you’ve done this for a few areas, like cardiology or cancer, the results would be related. You wouldn’t be starting from scratch each time and this ‘market research’ phase would be quicker,” he said.
The key, they explain, is to focus on the modifiable factors in a trial — the things that can more easily be changed, such as the outcomes that are measured or the way interventions are offered. If, at the pilot phase, researchers learn that they will struggle to recruit enough participants for a trial or can’t measure the effect of the treatment in a way that will inform clinical practice, they may have to come up with a different study or test a different treatment.
The researchers also stress that these are only two applications of decision sciences to clinical trial design and interpretation, pointing to the various research groups that have championed this work around the world.
“Getting research into clinical practice has always been challenging,” said Dr. Bansback. “We are advocating for a little bit more thought at the beginning of the design phase to make the clinical trial process run more smoothly and produce results that can more easily translate into real-world impact.”
Drs. Bansback, Harrison, and Metcalfe will be discussing their work and how it can be applied to complement clinical trials at an upcoming CHÉOS Work in Progress Seminar.