PCAST Holds Panel Discussion on Science Communication
On March 24, the President’s Council of Advisors on Science and Technology (PCAST) hosted a panel discussion on improving science communication. In his introductory remarks, Dr. Francis Collins, acting co-chair of PCAST and interim Presidential Science Advisor, noted serious concerns about science communication, particularly with respect to vaccine hesitancy, and stated that PCAST has an opportunity to advise the federal government on how to improve science communication to build public trust. Echoing remarks he made when stepping down as Director of the National Institutes of Health last year, Collins acknowledged that he used to think that communicating science to the public simply required putting the evidence in front of them. He conceded that urging people to “trust the science” is not enough and that the federal government needs help thinking through this topic, especially as it relates to the spread of misinformation. Turning to the panel, Collins noted that there is currently no dedicated government working group on science communication broadly, although many existing working groups on other topics have been discussing effective communication. He asked the panel to help PCAST think about what the government should be doing.
The first panelist was Arthur (Skip) Lupia of the University of Michigan, whose research focuses on how people make decisions with limited information. In his presentation, he explained that while “evidence-based policy” is a popular term at the moment, evidence alone is not enough, because other factors, such as value diversity, positionality, and politics, come into play when developing policy or making recommendations based on science. To increase the likelihood that the public will pay attention to and potentially follow policy based on science, messages about science should be (1) immediately relevant to core concerns, that is, connected to people’s thoughts and feelings; (2) consistent with values (e.g., does the information threaten or empower me?); and (3) actionable. Lupia added that credibility is another important factor: credibility is not inherent, even for scientists, but is bestowed by the listener or learner. If the scientific information being presented is credible but the messenger is not, the listener may not accept it, and vice versa, said Lupia.
Kathleen Hall Jamieson, University of Pennsylvania, continued the discussion by talking about the need to ensure the integrity of facts in public discourse, using the COVID-19 pandemic as an example. Her presentation included three recommendations: (1) establish a misconception monitoring, prevalence assessment, and response system for federal health agencies; (2) make all monitoring, prevalence assessment, and response data available to scholars in real time (i.e., get scholars involved in the process); and (3) use direct contact with the public to communicate foundational knowledge and bolster trust. Regarding this last point, Jamieson suggested taking basic, foundational knowledge about science and presenting it “every place we touch the public,” such as doctors’ offices, community vaccine clinics, and websites, to name a few. She also called for the federal government to audit the language used in all health agency materials to flag and fix instances that increase public susceptibility to misconception.
Consuelo Wilkins, Vanderbilt University, discussed the importance of communicating effectively with diverse communities and building trust. Using COVID-19 as an example, she described her team’s success in using community-led teams and community partners to deliver scientific information.
Jessica Hullman, Northwestern University, discussed issues with how statistical data are presented to the public and the need to allow the public to see and understand issues of “uncertainty” in science and statistics. She argued that when communicators deemphasize uncertainty as a core characteristic of science and statistics, the public is led to incorrectly believe that government information is infallible or certain. As a result, when it is revealed that a data source, such as the Census, contains “uncertainty” or “noisy estimates,” the public may interpret the data as inaccurate or untrustworthy. Hullman argued that “providing obviously imperfect measurements by default avoids the veneer of certitude, normalizing error, and will enable analysts to account for noise in inference.”
PCAST members had many questions for the panel, including about the extent to which sociology, psychology, and other social sciences are engaged in studies of science communication. COSSA will continue to follow PCAST’s work on science communication.