Her journalism career has taken her far beyond the newsroom, but when Jessica Davis looks at how artificial intelligence (AI) is changing the media industry, her reporter’s instincts kick in.
As Vice President of News Automation and AI Product at Gannett, Davis knows that AI’s impact is not a story about tools. It’s a story about human beings and how they grapple with change. Her job is to help lead people through that change by establishing strong AI governance and fostering ongoing learning and discussion across Gannett.
Davis shared some of her lessons learned at the recent Online News Association (ONA) conference in New Orleans, and they apply to publishers both large and small:
1. Confront negative perceptions and attitudes — and monitor how they change
“AI readiness and fluency is really important when it comes to the foundation of mindsets — how people think about AI and how it affects their work,” Davis says.
In other words, without hands-on experience using AI or sufficient training, journalists and other media professionals may worry about how it will affect them. And while tech companies and CEOs may be bullish about productivity and efficiency gains, the outlook can look grim to frontline employees.
“It’s important to acknowledge the feelings about job loss,” Davis says. “How you work on that is through storytelling about how AI benefits the work.”
Gannett surveyed employees at the outset of introducing AI to better understand their perceptions and attitudes. Perhaps more importantly, Davis says the company did follow-up research as AI pilot projects and deployments began rolling out. The data showed employees were less worried than before.
“There’s still more work to be done, and that’s okay,” she says. “The technology is changing almost every day.”
2. Analyze AI use cases through a risk/value lens
For the past two years, Davis has been helping lead Gannett’s AI council, which brings together representatives from editorial and other parts of the company. It’s a place to hear what AI projects are underway and what’s on the table, and to ask questions.
“It’s a popular meeting,” she says. “It’s not about being the ‘No’ police. It’s more about, ‘Hey, how do we support you as an organization to make sure you do this responsibly and well?’”
Over time, Davis says Gannett’s AI council has learned to look at ideas and proposals to use the technology through a risk/value matrix. Where the risks are low and the potential benefits are high, for instance, there’s a sort of “fast lane” process to move forward. Where risks are higher, there’s a separate process that allows for more investigation, testing, and feedback.
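For publishers sketching a similar process, the triage logic behind such a matrix might look something like the following TypeScript sketch. The risk and value levels, the routing rules, and the example proposal are all illustrative assumptions, not Gannett’s actual rubric.

```typescript
// Illustrative only: a simple risk/value triage for AI proposals,
// modeled on the matrix Davis describes. The levels and routing
// rules here are assumptions, not Gannett's actual rubric.
type Level = "low" | "medium" | "high";

interface AIProposal {
  name: string;
  risk: Level;  // e.g., legal exposure, accuracy, audience trust
  value: Level; // e.g., time saved, new coverage enabled
}

type Route = "fast-lane" | "deeper-review";

function triage(proposal: AIProposal): Route {
  // Low risk and high value moves ahead quickly; anything riskier
  // gets more investigation, testing, and feedback first.
  if (proposal.risk === "low" && proposal.value === "high") {
    return "fast-lane";
  }
  return "deeper-review";
}

// Hypothetical example proposal
console.log(triage({ name: "AI-drafted photo captions", risk: "low", value: "high" }));
// -> "fast-lane"
```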
3. Tailor AI learning journeys based on roles and needs
Gannett’s AI council includes HR staff, whom Davis credits with helping establish AI training that reflects the different roles and needs of people across the organization.
For senior leaders, Davis described AI learning and development as content that breaks down “What you’d like the boss to know about AI.” For rank-and-file employees, Gannett has introduced a program called AI Explore, which explains responsible use while identifying specific starting points and examples for weaving the technology into everyday workflows.
It’s an approach that’s working: before rolling out its AI learning journeys, Gannett saw 56% monthly average usage of AI tools like Microsoft Copilot. Today, adoption has risen to more than 93%. Davis and her team have encouraged “super users” who can share knowledge with their peers and have hosted “prompt-a-thons” to brainstorm smart questions to ask generative AI tools.
4. Dig deeper into the data to build trust and momentum
Gannett developed a customized field within WordPress VIP that editors could use to have AI quickly summarize articles into “Key Points” placed near the top. The problem was that only 1% of its team was actually using it.
Rather than give up, Davis says Gannett used Parse.ly to look more closely at that 1%. The company learned that, in some cases, teams using AI to write Key Points were seeing a 40% lift in engaged time. As more people saw those numbers, adoption quickly rose to 30%.
“People were afraid the summary would mean the audience would not read their story,” she says. “We gained insights to show that wasn’t correct, and that the value was there if you used it responsibly.”
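The implementation details aren’t public, but conceptually the feature pairs an editor-triggered field with a summarization call whose output a human reviews. Here is a minimal TypeScript sketch; the function names, prompt, and parsing are hypothetical stand-ins, not Gannett’s code.

```typescript
// Hypothetical sketch of an editor-triggered Key Points generator.
// `callModel` stands in for whatever LLM API is in use; none of
// these names come from Gannett's actual implementation.
async function generateKeyPoints(
  articleBody: string,
  callModel: (prompt: string) => Promise<string>
): Promise<string[]> {
  const prompt =
    "Summarize the following article as 3-5 short, factual bullet points. " +
    "Use only information stated in the article.\n\n" + articleBody;
  const raw = await callModel(prompt);
  // Split the response into bullets; an editor reviews and can edit
  // or reject them before the Key Points field is published.
  return raw
    .split("\n")
    .map((line) => line.replace(/^[-*•]\s*/, "").trim())
    .filter((line) => line.length > 0);
}
```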
5. Combine appropriate AI usage with transparency
Davis says Gannett’s research has found that its audience doesn’t want AI to be a big part of the news experience. That’s one of the reasons the company has limited using AI to actually produce content, with one notable exception.
Most newsrooms want their reporters focused on big, breaking stories, which means there are often not enough people to cover some relevant local news. That news comes to Gannett through press releases and notices from libraries and community groups. AI can easily rewrite that content to bring it in line with Gannett’s editorial standards, allowing the company to provide more hyper-local news.
That said, Gannett’s audience is never in doubt about how AI was involved. Davis says AI-assisted reporting is always disclosed, and readers are informed that editors continue to review such stories just as they would copy from human reporters.
This doesn’t just leave editorial staff free to pursue exclusives and scoops. It also allows Gannett to use Parse.ly to measure engaged time for local content and how it influences behaviors like subscriptions. Employees, meanwhile, have become more interested in other AI features and benefits.
“We have a lot of newsrooms raising their hands right now wanting to get help [from AI] with the metadata that is required to publish,” she says. “It is an accelerant if you use it right — and well.”
Applying AI governance to agentic capabilities
Most of Gannett’s AI projects so far have involved generative AI, which is great for summarizing, organizing, and surfacing insights from content. The next wave is agentic AI, where the technology can autonomously perform actions on a human’s behalf.
Gannett is already testing AI agents in areas like public records requests, using the technology to fill out forms to speed up the process for reporters. It’s an exciting area, Davis says, but it requires close collaboration with Gannett’s legal team and recognizing the risk that requests could be declined if AI makes any errors.
Though agentic AI offers significantly more powerful capabilities, Davis says Gannett will continue on the AI governance path it has already laid out. It starts with consistently asking what success with AI looks like and where “human in the loop” needs to come in. Then it’s a matter of educating employees about policies and procedures through a combination of asynchronous learning, live calls, and the freedom to play with the tools.
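In practical terms, “human in the loop” amounts to a gate between what an agent proposes and what actually gets submitted. Here is a minimal sketch of that pattern, with hypothetical types and function names; nothing here reflects Gannett’s actual system.

```typescript
// Hypothetical human-in-the-loop gate for an agentic workflow such
// as drafting a public records request: the agent only drafts, and
// a person must approve before anything is submitted.
interface RecordsRequest {
  agency: string;
  description: string;
}

async function fileWithApproval(
  draft: () => Promise<RecordsRequest>,                // agent fills out the form
  approve: (req: RecordsRequest) => Promise<boolean>,  // reporter or editor reviews
  submit: (req: RecordsRequest) => Promise<void>
): Promise<void> {
  const request = await draft();
  if (await approve(request)) {
    await submit(request); // only an approved draft leaves the newsroom
  }
  // A rejected draft never goes out, limiting the risk Davis notes:
  // errors that could get a request declined.
}
```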
When this approach is applied consistently, Davis says, employees shift from seeing AI as a threat to seeing it as something that empowers them.
“It’s amazing to see how quickly the lightbulb comes on,” she says. “That is really fun.”
Author

Shane Schick
Founder, 360 Magazine
Shane Schick is a longtime technology journalist serving business leaders ranging from CIOs and CMOs to CEOs. His work has appeared in Yahoo Finance, The Globe and Mail, and many other publications. Shane is currently the founder of a customer experience design publication called 360 Magazine. He lives in Toronto.