
Artificial intelligence (AI) is the No. 1 issue communicators feel pressure to address in 2025. Engaging with AI also carries a range of reputational risks for companies, driven by public concerns over job security, data privacy, and ethics. Gravity Research’s latest report outlines what executives need to know to navigate these emerging challenges.
Key Takeaways
AI Engagement: A Top Executive Focus
52% of executives feel pressure to engage with AI, making it the No. 1 societal issue of concern. Consumers are largely driving this pressure, seeking clarity on how and where companies are using AI. However, publicly addressing AI use can also spark distrust and criticism of corporate adoption.
3 Risk Categories: Social Issue Magnification, Employee Distrust, and Consumer Impact
Gravity Research identified three non-mutually exclusive AI risk categories for corporations to be mindful of: AI exacerbating existing social issue risks, such as identity-based corporate discrimination; workforce unease around AI implementation, particularly job displacement and skill devaluation; and AI’s effects on the consumer experience, including privacy and ethical considerations.
3 Case Studies: How Brands Are Navigating Each Risk Category
Gravity Research unpacks how three leading brands navigated each AI risk category, detailing the stakeholder backlash, the corporate response, and takeaways communicators can apply to their own AI risk planning.
As the AI risk landscape continues to evolve, Gravity Research provides real-time analysis to help businesses respond to emerging challenges. Download the full report to stay ahead.