
In Aesop’s fable of the Tortoise and the Hare, policy is often the hare and research the tortoise. New generative AI tools may offer policymakers and practitioners a quick, accessible way to tap research without waiting for a systematic synthesis of the evidence, or even taking the time to reach out to a researcher and set up a conversation.
Some voices in the evidence production and use field worry that, when a researcher is no longer the one providing the evidence, generative AI will further blur the line between trusted, high-quality evidence and poor-quality evidence available for decision-making. Others worry that generative AI will eliminate the need for practitioners or policymakers to partner with researchers at all, because they can get an answer faster through generative AI.
How many of these concerns are warranted? How do generative AI and evidence use connect? There are significant upsides and downsides to generative AI for knowledge brokers at the intersection of research and its use. For example, the human brain can only hold so much information at once, ready to recall and apply to a given context, and it can take years to develop the depth of expertise needed to give a quick, deep summary of research findings. Generative AI has the potential to fill this gap and be a useful tool for knowledge brokers. On the other hand, ChatGPT and other LLM-based tools like Google Notebook, described on its website as a “Research Assistant,” pull from the wider internet to answer requests. The internet, of course, includes high-quality science alongside many sources that are of poor quality or that purport to be research when they are not.

At the November Transforming Evidence Network conference in South Africa, there was a plenary discussion on transforming evidence through artificial intelligence and other technologies. One key message speakers shared was that the evidence production and use communities need to start thinking about the role of AI and integrating it into our work, or we will be left behind. Since we can’t change or stop AI, we need to think intentionally about the role of a knowledge broker in this context.
As one of my recent blog posts on knowledge brokers noted, they are the unsung heroes who translate complex data into actionable insights. They are the ones who translate information among the research, practice, and policy communities and provide the travel guidebook of what to know and how to get around.
One of the key roles that knowledge brokers play in evidence use is building trusted relationships with evidence users. Across studies of research use, the importance of these trusted relationships is a central theme. This may be for several reasons. Some literature (PDF) suggests that practitioners prefer a conversation about the evidence to only receiving disseminated materials. Taylor Bishop Scott, the Director of the Research Translation Platform at Penn State’s Research to Policy Collaborative, recently shared with me that their organization experimentally tested tailored emails against intensive partnership-based interactions and found that personal connections forged through meetings were associated with more research use. There is a human need to connect with other humans to make sense of information.
While using AI tools may be convenient, they likely cannot replace the trust and transparency of personal connection. Farrell and colleagues observed that learning organizations typically obtain, assimilate, and apply new information through social interaction that helps them make sense of it. A study testing three conditions to facilitate evidence use (an online registry of research evidence, tailored messaging, and a knowledge broker) found that tailored messaging was the most effective, but only in organizations with a high research-use culture. In organizations with a lower research-use culture, the knowledge broker was a key function in knowledge translation, emphasizing the need to work with a person to understand and apply information. To me, all of these examples suggest that even in a future where generative AI provides quick and easy summaries of evidence for decision-makers, there remains a key role for knowledge brokers in building trusted relationships and helping make sense of the evidence.
Contextualization is a second key role knowledge brokers play related to AI. As we know, evidence is rarely a perfect fit for a specific question. In a recent podcast episode, I highlighted an example from Head Start where no single study perfectly answered the policy need, but together, research from different related areas built a story that provided guidance to shape the policy. Knowledge brokers have a key role in helping decision-makers identify potential evidence that might apply and then facilitating how that evidence might work in a specific context or situation. The more knowledge brokers focus on deeply understanding evidence users and refining their skills in applying evidence to real-world issues, the harder it will be for AI to replace their value to the evidence use field.
If trusted relationships and the deep human understanding of context that knowledge brokers bring are unlikely to be replaced by generative AI, then how should evidence use and AI come together? I believe knowledge brokers can proactively build tools that complement what AI can do and clarify when knowledge brokering is needed. While still nascent, there are a few interesting examples of how AI can be integrated into work to facilitate evidence use:
- The International Society for Technology in Education is piloting a Stretch AI chatbot designed for education practitioners to ask questions and get trusted answers from a bounded, curated set of research that even cites sources. The tool has also been designed, unlike some other more open generative AI tools, to tell users when it cannot help with a specific question, protecting against hallucinations, or the AI tool creating answers that are not accurate.
- The Pan-Africa Center for Evidence is developing digital tools to support brokering. At the recent Transforming Evidence Network conference, the team shared one of their digital tools, which was designed to provide information but also to nudge the user when a particular answer might be better obtained by engaging a knowledge broker to contextualize the information. The chatbot then provides the contact information to connect with a knowledge broker.
- Multiple presenters at the November Transforming Evidence Network conference, including plenary speaker Frejus Thoto, discussed the potential of generative AI to create plain-language, more engaging dissemination content from research or evaluation reports. In seconds, generative AI tools can create a hypothetical interview between an AI “journalist” and an AI-generated voice of a report’s author, or create a brief, plain-language podcast summary of a report. These are early but exciting ideas on how AI can be a tool to support knowledge brokering.
- Finally, AI is quickly being integrated into research synthesis. For example, a new UK Research and Innovation funding initiative aims to develop infrastructure that leverages AI to produce research syntheses more quickly, accelerating timely evidence for decision-makers. There are many potential benefits to using AI to streamline parts of evidence synthesis. Others in Australia have emphasized the value of AI not only in streamlining evidence synthesis but in identifying connections across bodies of research in a way that is difficult for the human brain to do. But even once evidence syntheses are created, the human role remains key to checking for accuracy and contextualizing the findings.
Rather than only critiquing AI, the evidence use field needs to consider the conditions that facilitate evidence use and where the role of knowledge brokers remains. We need to make the value of this role clear to potential evidence users, continue to focus on building trusted relationships, and proactively design tools to support brokering activities. Generative AI is just one more way the field of evidence use continues to grow and evolve. Here at ACF, we are considering tools that provide synthesis from trusted information and that leverage AI to support systematic reviews. We hope these can be valuable to our staff as knowledge brokers, to our program partners, and to others seeking trusted information to support policy development.