Array’s Site-level Reporting Identifies Knowledge Gaps
April 4, 2025 • Array Team

In a recent article, “Optimize investigator training to reduce site burden,” we focused on using insights gathered prior to live training to make investigator meetings more meaningful to attendees. We noted how honing training to fit the audience’s needs and initial understanding improves the potential for learning and retention. In this article, we’ll look at how analyzing site staff engagement, confidence and knowledge from meeting data at the site level can immediately identify risks to trials, enabling stakeholders to take the most efficient steps to correct misinformation and expedite study start-up.
Understanding site engagement
Array works with study teams from the planning stage to determine how our engagement features and the analytics they provide can help achieve their goals for the meeting. We then measure engagement by interactions with the Array platform—questions asked, responses to polls and surveys, and slide annotations and saves. There are very few cases where simply knowing there was ‘high’ engagement in a life science meeting tells a complete and accurate story of the meeting’s success. Array’s Analytics and Insights Management (AIM) team helps bring greater meaning to engagement metrics by looking at them in the context of the meeting’s goals so clients can see where challenges lie. Without that greater context, engagement metrics alone can be misleading.
For example, the primary goal of all investigator meetings is training (learning and knowledge retention). The AIM team might look at the meeting data from a site-level perspective and identify sites that demonstrate a below-average initial understanding of the topic. Suppose those sites also took notes, saved slides and asked clarifying questions. It’s tempting to focus only on the low initial understanding and conclude that those sites lack the knowledge necessary to move forward. However, context puts their higher level of engagement in perspective: in this case, the correct answer is displayed as soon as poll results are shown, so attentive participants learn it immediately. The sites’ high level of engagement indicates they cared enough about the training to be active participants in their own learning, take steps to clarify their understanding (through questions and notes) and work toward retention by saving materials they could refer to later. That engagement marks the difference between lingering misinformation and genuine knowledge retention.
On the other hand, the sites identified as having lower levels of engagement and fewer correct answers likely need additional training. They neither answered accurately nor took steps to improve their knowledge.
Another valuable context for engagement is role or responsibility (study coordinator, principal investigator, site recruitment, etc.). The potential impact on a trial differs vastly depending on whether a misinformed attendee is a study coordinator or the principal investigator responsible for the trial’s scientific integrity. Knowing who your most active personas are can indicate who within a site is committed to developing mastery of the information and is most likely to share knowledge with their team. If that most engaged role, however, also correlates with lower levels of correct answers or confidence, strategic follow-up is required to prevent the spread of misinformation. If the misinformation pertains to patient recruitment and retention, such as incorrectly selecting reasons for inclusion and exclusion after reviewing case studies, it could impair a site’s ability to reduce screening failures.
There are also nuances to uncover. For example, when an individual’s responses at an investigator meeting didn’t match the high level of initial understanding the study team anticipated, we identified that the attendee was a last-minute substitute at the meeting and not the principal investigator who had registered.
Assessing site-level knowledge and confidence
Some of our clients have shared that among the most valuable insights they’ve obtained through site-level analysis are:
- Correlation between sites with the most correct polling answers and use of pre-meeting learning materials, such as via an on-demand platform
- Potential misinformation from prior studies or prior training
- Correlation between site experience and knowledge, confidence and/or engagement with training content
Measurement and assessment are always tied to specific meeting goals, such as ensuring knowledge of and confidence in how to use the electronic clinical outcome assessment (eCOA) platform. Through pre- and post-testing, it’s possible to see whether sites became more confident and knowledgeable about using the eCOA platform by the end of the meeting, after training sessions. Of course, it’s also possible to see whether there is a disparity in knowledge and confidence across sites.
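As a rough illustration of the kind of pre-/post-test comparison described above, the sketch below computes each site’s knowledge gain and flags sites still below a mastery threshold. The site names, scores, and the 0.75 threshold are hypothetical, chosen only for the example; they are not Array’s actual data or criteria.

```python
# Hypothetical pre- and post-test scores per site (fraction of correct answers).
pre_scores = {"Site A": 0.55, "Site B": 0.70, "Site C": 0.40}
post_scores = {"Site A": 0.85, "Site B": 0.90, "Site C": 0.50}

def knowledge_gain(pre, post, mastery=0.75):
    """Return each site's score gain and flag sites below the mastery threshold."""
    report = {}
    for site in pre:
        gain = post[site] - pre[site]
        report[site] = {
            "gain": round(gain, 2),
            "needs_follow_up": post[site] < mastery,  # assumed threshold
        }
    return report

print(knowledge_gain(pre_scores, post_scores))
```

A disparity like Site C’s (a small gain and a post-test score below the threshold) is exactly the signal that would prompt targeted follow-up training rather than repeating the full program for everyone.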
Similarly, polling conducted after each session can reveal challenging topics. For example, if a session on an element of the protocol had a larger percentage of incorrect answers, the content may not have been clear enough, and additional materials or training would benefit all attendees. However, if only certain sites struggled with the topic, further analysis can determine what other factors may have contributed, and follow-up training can be crafted to address those issues.
Benchmarking across meetings
Benchmarking metrics across a series of meetings helps sponsors improve their training over time, for instance by shining a light on differing preparedness across sites or regions. Establishing a benchmark for levels of engagement and knowledge will help identify groups who exceed it; these are the sites that are focused and likely to take action following the meeting. It’s also valuable to know which regional meetings had less knowledgeable and engaged staff, as those will need additional training and outreach. Other benchmarks our clients have asked us to set cover logistics and presentation feedback, which they used to make programmatic improvements across a series of investigator meetings.
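To make the benchmarking idea concrete, here is a minimal sketch that sets a benchmark as the mean engagement score across sites and splits sites into those above and below it. The scores and site names are invented for illustration, and a real analysis would likely use richer, goal-specific metrics rather than a single number.

```python
# Hypothetical per-site engagement scores (e.g., normalized interaction counts).
engagement = {"Site A": 82, "Site B": 47, "Site C": 68, "Site D": 91}

def benchmark_sites(scores):
    """Compute a mean-score benchmark and group sites above and below it."""
    benchmark = sum(scores.values()) / len(scores)
    above = sorted(s for s, v in scores.items() if v >= benchmark)
    below = sorted(s for s, v in scores.items() if v < benchmark)
    return benchmark, above, below

bm, above, below = benchmark_sites(engagement)
print(f"Benchmark: {bm:.1f}")
print("Above benchmark:", above)  # sites likely to act after the meeting
print("Below benchmark:", below)  # candidates for additional outreach
```

Tracking the same benchmark across a meeting series then shows whether follow-up training is actually moving below-benchmark sites upward over time.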
While the shared goal across investigator meetings is knowledge retention, it’s also critical to be respectful of investigators’ time so they remain motivated to embark on the next steps toward enrollment. Evaluations conducted at each meeting can help identify areas for improvement, such as the duration, specific speakers or content, or logistics like the venue.
Conducting site-level analysis can inform your investigator meeting planning and follow-up to avoid misinformation that puts trials at risk. What’s more, it can empower you to provide more effective training and achieve faster study start-up times.