Emilia Chiscop-Head, PhD: You manage two key programs at the School of Medicine: the Clinical Quality Management Program (CQMP) and the Research Quality Management Program (RQMP). Could you explain why these programs are important?
Kristi Prather: Both programs, the CQMP and RQMP, have the goal of keeping our research in a proactive state. The Clinical Quality Management Program defines the minimum standards for internal quality management reviews of our clinical research studies in the Schools of Medicine and Nursing. The intent of the policy is to outline the various aspects of the workflow, and to ensure consistent implementation of routine monitoring during the conduct of research studies. The central office supports the program by providing quality management (QM) reviewers with training and tools to facilitate the reviews, and by overseeing implementation of the policy. Then it’s the QM reviewers who use their subject matter expertise in the type of research being conducted to pick up on discrepancies and help study teams get those corrected, with the support of the Clinical Research Unit (CRU) leadership.
The Research Quality Management Program is made up of what we call research quality teams within each department, center, and institute. They manage their research quality and risk by establishing a culture that promotes dialogue and shared accountability, as well as by developing clear and effective processes and communication pathways so that someone can intervene early and mitigate a potential problem.
"It is human instinct to blame individuals for errors, when often it is the system that failed in some way."
Could you share an example about a situation when these programs proactively solved a problem?
Kristi Prather: I can give a pretty easy example from CQMP. A QM reviewer evaluated a study in the early enrollment phase and found that the study had not been set up with adequate source documentation methods. The study did not have a dedicated clinical research coordinator to assist the PI with protocol implementation. So, as a result of this early review, she was able to intervene, work with the CRU leadership and help the PI develop the resources they needed to be successful.
The reality is people are very busy. There are many moving pieces during the research lifecycle. Having consistent processes that keep research teams in a proactive state is very important, even for smaller research studies. Yes, establishing quality management processes involves costs and resources; but think about the time and resources required to fix a problem of that nature, if it had not been caught early.
Many times, study team members have their own way of doing things and do not realize that there are resources, templates and tools available to them as part of this research community; and that there are evolving best practices that they may not be familiar with. Quality management can provide that insight. And thus, in the long run, an upfront commitment to quality saves time and resources.
Which tools are most widely used and why?
Kristi Prather: For CQMP, we provide a monitoring plan template and sample language, as well as regulatory and participant chart review tools which literally walk you through what things to look at in terms of compliance with guidance and policy. We also provide a central database that facilitates reporting of the review findings. All of these tools are meant to ensure consistency and keep CRUs from having to develop individual processes themselves, so that they can focus on the research.
The most popular service that we provide is our ongoing support and education. In the central office, we are dedicated to forming collaborative relationships by disseminating information and, most importantly, by being accessible to answer day-to-day questions. I think, throughout our office, we truly aim to provide excellent customer service. Our partners reach out to us very regularly via email. We also offer regular ongoing education sessions and round tables. We are very high touch and work hard to build a quality management community comfortable with ongoing dialogue. We focus our attention on a culture of continuous process improvement and data-driven decision-making.
Our ultimate goal is not only to support the quality of an individual study, but to leverage the information across studies and help research leaders support their researchers and build processes that will help prevent problems in the future.
Among these supportive processes, you are providing training and support with Root Cause Analysis (RCA) and Corrective and Preventive Action (CAPA) planning. What are the main objectives of these methods, and what are some of the lessons you have learned by providing them?
Kristi Prather: I am a big advocate of proactive planning to ensure quality and prevent bad things from happening in the first place. But when unexpected things do happen, understanding the root causes that led to the problem is key. It is human instinct to blame individuals for errors, when often it is the system that failed in some way. It could be an unclear process, inadequate or missing training, an outdated web resource application, or inadequate management or resource allocation. It could also be other technology failures. It is important for us to build a culture where we don't think of root cause analysis and CAPA planning as punitive, or as an indicator of failure. In the private sector, this is business as usual: you identify an issue, take proactive measures to correct it, and try to prevent it in the future - a methodology for continuous improvement. It’s only a big deal if we make it one.
What are some of the challenges or barriers?
Kristi Prather: Some people fear that they may discover problems that are not within their power to address. There is an overwhelming assumption that if a root cause is identified, there is an obligation to fix it. I would say it is better to try to understand a problem and why it occurred, even if an immediate solution isn’t feasible. Sometimes it is a matter of monitoring the issue over time to assess its likelihood, impact or severity, and the ability to detect it early if it recurs. This is the spirit of quality management: small, incremental improvements that pay off over time.
"I would encourage research unit leaders to partner with our office at any time"
When should a research unit initiate a RCA and CAPA process?
Kristi Prather: If a department, center, or institute does not have one already, I would encourage research unit leaders to partner with our office at any time to develop an internal process to manage the workflow. That way when a research team needs help, there’s already an established support system in place. We can share standard operating procedures that have worked well for other units, which includes guidance on when an RCA and CAPA plan should be initiated. We can also help units develop tools for consistent documentation, and provide training and coaching on the methodology. While I sometimes assist groups with conducting a root cause analysis, the most effective approach is to have someone within the research unit facilitate the process because they understand the internal expectations and processes, and therefore will better understand where the processes failed.
Individual research teams are also welcome to reach out to our office directly if their research unit does not already have a standard process in place. I’d also like to mention that oftentimes a review only happens when it’s part of a required response to an audit or a problem identified by someone outside of the study team. But we do not need to wait until someone else identifies a significant issue. The more people address observed or potential problems early, and identify their root causes, the more they will appreciate the benefits of the process and embrace it as part of their everyday culture.
Could you give us some examples of concerns that departments can bring to you for successful root cause analysis and CAPA planning?
Kristi Prather: Some of the issues that come to my mind are related to recurring protocol deviations. We worked with a CRU where the initial assumption was that an individual person on the study team was being sloppy or was inadequately trained. But after some digging, we learned that there had been miscommunication: certain procedures were being touted as optional if time was limited, even though they were all considered required by the protocol. Often problems arise due to poor communication and lack of appropriate oversight. Are the protocol and study workflows clear? Is the study feasible given the allocated resources? Do people have support from their leadership and management team? Is there enough PI involvement? The RCA provides an opportunity to really figure out where along the process the errors got introduced and, most importantly, why!
If you see something significant that has the potential to impact participant safety, data integrity or overall study integrity, or even a minor issue if it happens over and over again, I’d encourage you to reach out to give this methodology a try. It might only take us an hour or so to sit down, do a straight-forward analysis of the problem, and put actions into place to address it.
Do you think research quality management could also be helpful for non-biomedical research?
Kristi Prather: I'm a big proponent of quality management and I support the idea of shifting our mindset from fixing errors to creating communication pathways, promoting open dialogue, utilizing the resources that already exist, and identifying risks before something bad happens. That type of thinking is helpful regardless of the type of work being done.
Do these initiatives exist at other top universities, or is only Duke implementing them and leading the way?
Kristi Prather: These programs show Duke's commitment to establishing a culture that promotes open dialogue and shared accountability. These programs are just two of the many initiatives demonstrating that commitment. We’ve definitely received calls and emails from other institutions reaching out to learn from our model as this is an area for continued growth in the research community. The Duke Office of Scientific Integrity has helped build and promote an infrastructure for research quality and integrity, but the stakeholders who are embracing it are the ones who make the magic happen.
Research quality has to be a grassroots movement. It needs to include every investigator, trainee, and staff member, because at the end of the day it's the individuals conducting the research who are driving the quality.
And, I would like to note that you are someone who understands well the work involved in clinical research as well as the problems that can occur. You have been involved in clinical research for almost two decades, including twelve years at the Duke Clinical Research Institute. Why did you decide to change gears?
Kristi Prather: After my undergraduate studies, I started doing site-based research in psychiatry and addiction as a research coordinator. Over the years I decided that I wanted to be more involved in the analysis and in drawing conclusions from the research data. I earned a Master of Public Health degree with a concentration in biostatistics in 2008. This provided me with the opportunity to still be involved in research all the way from the design phase through data analysis and publication, which was a joy. I was fortunate to work with some amazing study teams, biostatisticians and programmers from whom I learned a lot. While that direct involvement in the science was very fulfilling, I realized that there was a way to impact research on a broader scale by supporting quality management. I love being in the scientific integrity space. We are all very passionate about supporting research, and I encourage research unit leadership and individual researchers to contact us if there’s anything we can do to support the wide variety of amazing research being conducted at Duke each and every day.