Summer of AI: How can we talk about AI across audiences?


Given the massive amounts of data used to fuel AI models, it’s abundantly clear that AI and privacy are inextricably linked. Explaining that connection to different audiences, however, is far less straightforward, and policymakers and the media are among the most important stakeholders who need to understand the overlap so they can get it right.

This challenge was one of many considered during the Future of Privacy Forum’s inaugural DC Privacy Forum: AI Forward and Annual Advisory Board Conference in Washington, DC. The longstanding and respected privacy-focused nonprofit brings together 220+ member companies and privacy executives from leading global organizations across a variety of industries and sectors.

FPF celebrated its 15th anniversary at the three-day conference, which was especially timely given where FPF started (online privacy) and where it stands in 2024: at the intersection of privacy and AI. The stakes are high, which is why the organization launched the FPF Center for Artificial Intelligence to better serve policymakers, companies, nonprofit organizations, civil society and academics as they navigate the choppy waters of AI policy and governance.

Glen Echo Group CEO Maura Colleton Corbett delivered opening remarks, then facilitated a panel on “Effectively Communicating About Privacy to Consumers, Media, and Policymakers.” The room was full of privacy professionals eager to learn more about media trends in tech policy, AI and privacy. Fifteen years ago that wouldn’t have been the case, but it certainly is now, a sign not only of interest from the industry but also of the growing pressure on privacy professionals to figure this out.

Maura’s remarks highlighted the enormous challenge of understanding, and helping others understand, complex tech policy issues with wide-ranging impacts in simple, relevant terms:

“It is critical, especially in today’s fractured media environment, that those covering tech actually understand it. The media plays such an important role in shaping the larger narrative and the public’s understanding of any given subject. Emerging technologies ― like AI, the subject of so much discussion today ― are already changing our lives in so many ways. Education is key. How we communicate about new technologies matters immensely as far as building public trust, adoption and regulation.”

Maura went on to describe how AI is particularly challenging because it touches all aspects of policy, from civil rights to data privacy and copyright. The remarks led into a panel discussion featuring Anne J. Flanagan (Vice President for Artificial Intelligence at FPF) and top tech reporters Cecilia Kang of The New York Times and Maria Curi of Axios.

The panelists stressed the importance of understanding the intersection of privacy and AI, and of measuring its impact. Maria and Cecilia both emphasized how important it is to talk with leading experts in order to inform accurate reporting. Maura then addressed the evergreen challenge of educating consumers on complex technology and policy issues, especially when the stakes have never been higher.

Enter FPF’s newly launched AI Center, which will play a role in that education by expanding FPF’s existing AI work, advancing large-scale novel research projects, and serving as a source of trusted, nuanced, nonpartisan and practical expertise. As with any new technology, trust is integral to adoption, and on that point everyone in the room agreed.

Maria specifically emphasized the importance of mutual trust and constant communication in order to report on fast-moving tech policy in a constantly breaking news environment. “It all comes down to the relationship you have, and literally texting people as things are breaking or developing and you're trying to confirm things or find out what's next,” she explained. “It's about trust too. I understand that the people I'm talking to maybe don't want to run the risk of having something that could have come out in a publication that's going to be read by thousands of people.”

Maura and the other speakers consistently returned to the point that while AI has been around for a long time, accelerated innovation and the arrival of ChatGPT made it seem to arrive slowly, then all at once. As Maura has said many times, hype is not an honest educator, which makes it difficult for the media not only to understand the nuances between different models, but also to figure out what is hyperbole and what is real.

What we do know is that these questions will persist, especially as AI becomes more prevalent and more complicated. Journalists will continue to seek out information that explains these issues in a way readers will understand and engage with, and that can inform policymakers on the front lines of regulation. Relationships with the media are built on trust, not transactional communication, and understanding where reporters sit, how they approach coverage of these issues and how their editors shape that coverage (or don’t) will be increasingly critical to positioning your issue, product or company.

If you’re interested in exploring how to build, shape and maintain these relationships, let’s work together.