Identifying and Engaging with Stakeholders for an AI (AAA) System Project

We welcome all feedback and recommendations for improvement.

Executive Summary

This document provides guidance for identifying and engaging stakeholders in AI system projects, moving beyond traditional shareholder-focused approaches toward a holistic stakeholder framework that addresses the gap between technical AI development and broader societal impacts. The methodology adopts ForHumanity’s definition of stakeholders, which encompasses both direct stakeholders, such as internal employees, customers, and regulatory bodies, and indirect stakeholders, including society, non-profit organizations, and the environment. This approach recognizes that AI systems create ripple effects far beyond their immediate operational context. The document also traces the evolution from the shareholder primacy of the 1980s toward broader stakeholder accountability, as corporations increasingly recognize that they cannot operate in isolation from their environmental and social impacts.

The document introduces a structured “double diamond” methodology adapted for AI projects, featuring four sequential phases that move from initial stakeholder discovery through detailed analysis and systematic categorization to final integration into AI system governance. The process treats any internally created stakeholder list as inherently incomplete and “always open” to expansion, and it gathers information about each stakeholder’s role, interests, influence, and stance to enable strategic prioritization. The methodology employs three primary visual frameworks for stakeholder mapping: the Onion Framework, which organizes stakeholders in concentric rings from development teams out to society-level impacts; Graph Network Diagrams, which provide computer-readable representations of complex relationships; and Stakeholder Matrices, which support analysis based on paired characteristics such as power-interest and knowledge-support dynamics.
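As a concrete illustration of how such a mapping can be made computer-readable, the short Python sketch below records stakeholders with power, interest, stance, and relationship attributes and classifies them into a power-interest quadrant. All names, fields, and thresholds in the sketch are illustrative assumptions, not part of the methodology itself.

from dataclasses import dataclass, field

# Illustrative sketch only: the class, fields, and 0.5 threshold below are
# assumptions used for demonstration, not a normative data model.

@dataclass
class Stakeholder:
    name: str
    category: str           # e.g. "Team", "Organization", "Ecosystem", "Consumer", "Society"
    power: float             # 0.0 (no influence) to 1.0 (decisive influence over the system)
    interest: float           # 0.0 (indifferent) to 1.0 (strongly affected or engaged)
    stance: str              # e.g. "supportive", "neutral", "opposed"
    relationships: list[str] = field(default_factory=list)  # edges for a graph network diagram

def power_interest_quadrant(s: Stakeholder, threshold: float = 0.5) -> str:
    """Classify a stakeholder into a power-interest quadrant (assumed 0.5 cut-off)."""
    if s.power >= threshold and s.interest >= threshold:
        return "manage closely"
    if s.power >= threshold:
        return "keep satisfied"
    if s.interest >= threshold:
        return "keep informed"
    return "monitor"

# Example: a regulator with high power and moderate interest.
regulator = Stakeholder(
    name="Data protection authority",
    category="Ecosystem",
    power=0.9,
    interest=0.6,
    stance="neutral",
    relationships=["AI development team", "Directly affected consumers"],
)
print(power_interest_quadrant(regulator))  # -> "manage closely"

In a fuller implementation, the relationships field could serve as the edge list for a graph network diagram, and the quadrant labels could inform prioritization during the analysis and categorization phases.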

The framework identifies five critical stakeholder categories: Team Level personnel who shape day-to-day development decisions; Organization Level executives and investors; Ecosystem Level customers and regulators; Directly Affected Consumers, who experience impacts but may lack design influence; and Society Level stakeholders, including government agencies and marginalized communities, who face broad societal consequences. Particular attention is given to environmental stakeholders, recognizing both the substantial negative impacts of AI energy consumption, projected to reach 0.5% of global electricity generation by 2027, and potential positive contributions to conservation, climate change mitigation, and ecosystem monitoring through optimized resource management and environmental data analysis.
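To show how these categories might be encoded in a stakeholder register, the brief sketch below (in the same illustrative Python style) represents the five levels as an enumeration with an assumed default mapping to the direct/indirect distinction; the names and the mapping are assumptions for demonstration only.

from enum import Enum

# Illustrative sketch only: values and the direct/indirect default below are assumptions.

class StakeholderLevel(Enum):
    TEAM = "Team Level"                               # personnel shaping day-to-day development
    ORGANIZATION = "Organization Level"               # executives and investors
    ECOSYSTEM = "Ecosystem Level"                     # customers and regulators
    DIRECTLY_AFFECTED = "Directly Affected Consumers" # impacted, often without design influence
    SOCIETY = "Society Level"                         # government agencies, marginalized communities

# Assumed default mapping to the direct/indirect distinction; in practice this
# is a case-by-case judgement made during stakeholder analysis.
DIRECT_LEVELS = {
    StakeholderLevel.TEAM,
    StakeholderLevel.ORGANIZATION,
    StakeholderLevel.ECOSYSTEM,
    StakeholderLevel.DIRECTLY_AFFECTED,
}

def is_direct(level: StakeholderLevel) -> bool:
    """Return True if the level is treated as a direct stakeholder in this sketch."""
    return level in DIRECT_LEVELS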

The methodology emphasizes three core implementation principles: inclusivity, people-centered design, and iterative refinement. It also stresses the importance of statistical validity in stakeholder representation and of continuous education for technical stakeholders in AI ethics and security. The framework highlights potential friction points between technically oriented AI companies and non-technical stakeholders in government agencies and non-profit organizations, where gaps in technical expertise can lead to inappropriate deployment or inadequate risk management. This analysis suggests that effective AI governance requires stronger board independence and a return to broader organizational accountability, moving away from narrow profit maximization toward comprehensive stakeholder consideration.

This stakeholder identification and engagement framework provides essential infrastructure for responsible AI development by systematically mapping stakeholder relationships, power dynamics, and interests, helping organizations anticipate and address the wide-ranging impacts of AI systems. The methodology supports the development of AI systems that are not only technically sound but also ethically grounded and socially beneficial, with practical applications ranging from AI startups to large enterprises and government agencies. As AI systems become increasingly integrated into society, this approach to stakeholder engagement becomes not just advisable but essential for sustainable and responsible AI development that serves the interests of all affected parties while maintaining accountability for environmental and social impacts.

For a complete set of practical guidance materials, please visit our library.