Leverage Benchmarked Training Metrics to Understand Site Performance Across Investigator Meetings
June 10, 2025 • Array Team

Benchmarking specific training metrics across a series of investigator meetings provides actionable insights for study teams around site preparedness, knowledge gaps, and areas for improvement in future meetings. Further, harnessing benchmarks across therapeutic areas or entire clinical research organizations can identify training best practices. Given all the logistics and planning that go into investigator meetings, it may seem there won't be enough time to establish systems and conduct the analyses benchmarking requires. However, using these insights to improve a clinical trial's chances of success is too important to skip, and with the right partner the process can be seamless.
The first step is determining which data is important to measure and compare over time. Array’s Analytics and Insights Management (AIM) experts regularly work with study teams during the planning phase to identify the necessary metrics and plan the optimal use of engagement features to gather them.
Elements such as polling, surveys, and digital Q&A enable attendees to interact with speakers and content and share their thoughts. With each of these interactions, Array’s technology captures information that contributes to the dataset needed to make comparisons of both individual and site performance across meetings.
Informed Decision-making
Benchmarking is a strategic process that requires not only compiling data but also understanding why you are doing it. Investigator meetings are focused on training site staff on key information and tools related to the clinical trial. Therefore, study teams often gather insights from a single meeting to find out how well investigators understand what's been presented and how prepared they are to carry out their duties. This speaks to whether the meeting achieved its goal.
Benchmark analysis allows for greater insights that can inform decisions around site follow-up and future meeting design. Comparing specific metrics across study teams or an entire clinical research organization leverages a wealth of previously difficult-to-obtain data points that can pinpoint significant deviations from the benchmark. This can indicate who needs additional training, or where there are low levels of engagement with content and presenters. It can also reveal training content and methods that are particularly successful and worth replicating.
Among the metrics Array most often gathers and compares in this way are site performance, engagement (identifying which individuals and sites were most active), percentage of correct answers to polling questions, and evaluation scores for elements such as venues, presentations, and speakers.
The following are examples of how benchmark reporting can provide deeper insights into attendee performance in an investigator meeting.
Engagement: When engagement is significantly above the stakeholder’s meeting benchmark, the AIM team will look at attendees’ satisfaction ratings to see if there is a correlation between engagement and overall meeting satisfaction. Taking it a step further, if engagement and satisfaction are both above the benchmark, they look at meeting evaluations for things worth repeating to continue to promote this level of engagement. This could be anything from a location that suited attendees to having a favored speaker return.
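As a simple illustration of this kind of check, the sketch below computes per-meeting engagement against a benchmark and correlates engagement with satisfaction. The file name, column names, and benchmark definition are assumptions for illustration only, not Array's actual data schema or methodology.

```python
import pandas as pd

# Hypothetical per-attendee export: one row per attendee per meeting.
# Column names are illustrative, not Array's actual schema.
df = pd.read_csv("attendee_metrics.csv")  # columns: meeting_id, attendee_id,
                                          # engagement_score, satisfaction_score

# Benchmark here is simply the mean engagement across all meetings on file.
benchmark_engagement = df["engagement_score"].mean()

# Per-meeting averages, and the meetings sitting above the benchmark.
per_meeting = df.groupby("meeting_id")[["engagement_score", "satisfaction_score"]].mean()
above_benchmark = per_meeting[per_meeting["engagement_score"] > benchmark_engagement]

# Is higher engagement accompanied by higher satisfaction?
correlation = df["engagement_score"].corr(df["satisfaction_score"])
print(f"Engagement vs. satisfaction correlation: {correlation:.2f}")
print(above_benchmark.sort_values("engagement_score", ascending=False))
```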
Knowledge transfer: As the goal of investigator meetings is to train staff on important topics, understanding both positive and negative deviations from the benchmark can help inform next steps.
Meetings where correct-answer rates fall below the benchmark often point to the need to find better ways to engage site staff around complex topics (such as data management). And while it's tempting to be satisfied with a correct-answer rate that is higher than the benchmark, this too warrants deeper analysis. Sometimes it will reveal a disparity, such as attendees struggling with questions on a particularly important topic. In that case, investigators may benefit from additional reinforcement despite the seemingly high overall score.
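To make the point about topic-level disparities concrete, here is a minimal sketch that breaks polling results down by topic and flags topics falling below the benchmark even when the overall correct-answer rate looks healthy. The file, columns, and benchmark value are hypothetical.

```python
import pandas as pd

# Hypothetical polling export: one row per attendee per polling question.
polls = pd.read_csv("polling_responses.csv")  # columns: meeting_id, topic,
                                              # question_id, is_correct (0/1)

overall_benchmark = 0.80  # illustrative benchmark from prior meetings

overall_rate = polls["is_correct"].mean()
by_topic = polls.groupby("topic")["is_correct"].mean().sort_values()

# Even if the overall rate clears the benchmark, individual topics may not.
needs_reinforcement = by_topic[by_topic < overall_benchmark]

print(f"Overall correct-answer rate: {overall_rate:.0%} (benchmark {overall_benchmark:.0%})")
print("Topics below benchmark:")
print(needs_reinforcement)
```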
Site performance: This is one of the areas of greatest interest to study teams that request benchmark reporting, as it compares important metrics across sites (rather than individuals). For instance, AIM can provide insights into how sites compared to the client's baseline average for the percentage of correct polling answers. Digging deeper into the data, they can also compare engagement among sites. If the sites with under-benchmark knowledge scores are above the benchmark for engagement, it means they are saving slides, taking notes, and asking questions. These are all signs they are addressing their knowledge gap and taking steps toward retention. In this case, sponsors would not need to be overly concerned about the lower knowledge scores.
The greater concern would be sites that are below the benchmark for both knowledge and engagement. They have knowledge gaps and are not actively participating in the training to close them. This indicates a problem, and sponsors should follow up with these sites first.
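The follow-up logic described above can be expressed as a simple classification of each site against the knowledge and engagement benchmarks. The sketch below uses hypothetical site-level averages; the file, column names, and priority labels are illustrative assumptions rather than Array's reporting format.

```python
import pandas as pd

# Hypothetical site-level summary: one row per site.
sites = pd.read_csv("site_summary.csv")  # columns: site_id,
                                         # pct_correct, engagement_score

knowledge_benchmark = sites["pct_correct"].mean()        # or a client baseline
engagement_benchmark = sites["engagement_score"].mean()

def follow_up_priority(row):
    below_knowledge = row["pct_correct"] < knowledge_benchmark
    below_engagement = row["engagement_score"] < engagement_benchmark
    if below_knowledge and below_engagement:
        return "high"      # knowledge gap and not engaging: contact first
    if below_knowledge:
        return "monitor"   # gap exists, but engagement suggests self-correction
    return "none"

sites["follow_up"] = sites.apply(follow_up_priority, axis=1)
print(sites.sort_values("follow_up"))
```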
Series: When a series of meetings is held over time and/or in multiple regions, it's helpful to compare metrics such as engagement, knowledge, and satisfaction both to the benchmark and to each other. This gives teams an opportunity to adjust agenda times to devote more attention to challenging topics, and to brief speakers on where knowledge scores fell below the benchmark so they can emphasize that content.
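For a meeting series, the same idea can be applied per meeting or region by tabulating each metric's gap from the series-wide benchmark, as in this hypothetical sketch (file and column names assumed for illustration).

```python
import pandas as pd

# Hypothetical series export: one row per attendee per meeting in the series.
series = pd.read_csv("series_metrics.csv")  # columns: meeting_id, region,
                                            # engagement_score, pct_correct,
                                            # satisfaction_score

metrics = ["engagement_score", "pct_correct", "satisfaction_score"]
benchmark = series[metrics].mean()                       # series-wide benchmark
by_meeting = series.groupby(["region", "meeting_id"])[metrics].mean()

# Signed gap from the benchmark: negative values mark where to adjust the
# agenda or brief speakers before the next meeting in the series.
print((by_meeting - benchmark).round(2))
```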
Site feedback: The general evaluations completed by attendees at the end of the investigator meeting allow for numerical ratings of many individual elements of the meeting. This provides the opportunity to learn how everything from sessions to logistics compared to the benchmark.
For session feedback, it's ideal to see a high overall session evaluation score. This suggests attendees are ready to apply what they learned about key aspects of the study and feel confident and prepared to enroll and retain participants. Scores that are higher than the benchmark also indicate that site staff felt the meeting was a good use of their time and training rather than a contribution to site burden.
Additionally, ratings for individual sessions can inform future training. Sessions that fall below the benchmark for satisfaction may shine a light on a presentation that could benefit from engagement tools or could be made more succinct in the next iteration. As noted earlier, there are often correlations between satisfaction and engagement. If the lowest-rated sessions also had low engagement, that is a weak point to address in future meetings.
To create a foundation for benchmarking elements of the meeting, be sure to ask for ratings of the most important features and logistics. Ask specific questions about content or tools presented during the session, such as a reference document for the eCOA or a Site Action Plan, to determine whether attendees found them valuable. If the share of attendees who "agree" falls below the benchmark, consider reviewing those materials to see if improvements can be made. Similarly, look for instances where evaluation responses are higher than the benchmark for logistics such as the venue, location, experience, or onsite staff. These are things worth replicating in the future.
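One way to benchmark items like these is to compare the share of favorable ("agree" or "strongly agree") responses per evaluation item against a benchmark, as in the hypothetical sketch below; the response scale, file, and benchmark value are assumptions.

```python
import pandas as pd

# Hypothetical evaluation export: one row per attendee per evaluation item.
evals = pd.read_csv("evaluation_responses.csv")  # columns: meeting_id, item,
                                                 # response (Likert text)

agree = evals["response"].isin(["Agree", "Strongly agree"])
agree_rate = agree.groupby(evals["item"]).mean()

item_benchmark = 0.85  # illustrative benchmark from previous meetings

review_candidates = agree_rate[agree_rate < item_benchmark].sort_values()
worth_replicating = agree_rate[agree_rate >= item_benchmark].sort_values(ascending=False)

print("Items to review:\n", review_candidates)
print("Items worth replicating:\n", worth_replicating)
```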
Actionable insights for trial impact
Gathering insights from individual meetings enables you to understand the attendee's journey throughout the meeting – and whether they ended up where you needed them to be. Taking that information and benchmarking across all meetings can unlock insights across site teams and therapeutic area silos in the organization. This enables you to look at meeting performance holistically, learn colleagues' best practices, and make informed decisions to improve future programming to meet your goals. What's more, identifying and addressing performance gaps discovered in this way can help improve recruitment and retention as well as the overall efficiency of a clinical trial.