Joint Artificial Intelligence Center (JAIC)

In October 2020, the JAIC named ForHumanity a trusted ecosystem partner in AI ethics. We are preparing to adapt our process of Independent Audit of AI Systems to the AI ethics principles of the US Department of Defense (DoD).

Setting the Context

Before the “technological seeds” are planted in Nand Mulchandani’s “AI Victory Gardens,” DoD employees and partners will want to understand and embed the basics of responsible AI rules and standards.  ForHumanity can be part of the JAIC community that provides customized nose-to-tail coordination on behalf of the Department of Defense, a cohort of DoD contractors, and other interested parties.

Should the JAIC choose to deploy Independent Audit of AI Systems as a uniform standard of responsible AI, it will benefit from ForHumanity’s partnership with the Autonomy Observatory, Workshop and Laboratory (OWL) at Johns Hopkins University’s (JHU) Institute for Assured Autonomy (IAA). The IAA is a research institute jointly run by JHU’s Applied Physics Laboratory (APL) and JHU’s Whiting School of Engineering. This partnership will allow us to draw on APL’s extensive DoD relationships to secure rapid cooperation from the services, the intelligence community, and defense contractors. With help from the IAA and APL, which is deeply engaged across the full range of the defense ecosystem, ForHumanity can facilitate interactive consultation and review of the audit with relevant stakeholders and directly address the unique requirements of the sector. ForHumanity’s expert team of AI ethicists will guide this cohort and establish uniform rules and standards for all artificial intelligence and autonomous systems that impact humans and are procured by the DoD.

Maturity Models, Use Cases and Libraries

ForHumanity maintains a Body of Knowledge repository and specific knowledge stores clarifying key elements of audit compliance.  These stores are available to all accredited auditors looking to verify audit compliance against minimum standards. We establish maturity models that rate documentation, training and expertise as insufficient, sufficient or mature across 40+ areas of audit compliance, from design to decommission.  These knowledge stores are available to all ForHumanity Contributors and ForHumanity Certified Auditors (FHCAs) and will track use cases and anonymized insufficiencies. (RAI Governance – IT1, AI Product and Acquisition Lifecycle – IT3, and RAI Requirement Validation – IT4)
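As a loose illustration only, not ForHumanity’s actual schema, a maturity model of this kind can be thought of as a rating of each compliance area along several dimensions. In the sketch below, the names ComplianceArea and is_compliant, and the rule that every dimension must be at least sufficient, are hypothetical assumptions introduced for the example.

    from enum import Enum
    from dataclasses import dataclass

    class Maturity(Enum):
        """Maturity ratings applied across areas of audit compliance."""
        INSUFFICIENT = "insufficient"
        SUFFICIENT = "sufficient"
        MATURE = "mature"

    @dataclass
    class ComplianceArea:
        """One of the 40+ audit-compliance areas, from design to decommission."""
        name: str
        documentation: Maturity
        training: Maturity
        expertise: Maturity

        def is_compliant(self) -> bool:
            # Hypothetical rule: every dimension must be at least sufficient.
            return all(m is not Maturity.INSUFFICIENT
                       for m in (self.documentation, self.training, self.expertise))

    # Example: an auditor records ratings for a single (hypothetical) area.
    bias_testing = ComplianceArea(
        name="Bias testing",
        documentation=Maturity.MATURE,
        training=Maturity.SUFFICIENT,
        expertise=Maturity.INSUFFICIENT,
    )
    print(bias_testing.is_compliant())  # False: expertise falls below the minimum

Ratings recorded this way could then be tracked over time in the knowledge stores, which is what allows insufficiencies to be aggregated anonymously across a cohort.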

Explainability

Adopting a focus similar to DARPA’s work on Explainable AI (XAI), ForHumanity believes that explainability is a core tenet of governance, accountability and oversight.  Early governance models for similar or related AI systems recognized the importance of explainability and its significance in how institutions make decisions about deploying systems that can impact humans.

While explainability may be difficult or impossible in certain instances of complex neural networks or deep reinforcement learning systems, explainability does not require exhaustively granular recitations of precise decision pathways.  Rather, the adversely affected individual must be able to access a better understanding of the decision-making process.  An entity that cannot provide such an explanation will struggle to earn the trust of the warfighter. (Traceable – EAI3)

ForHumanity takes a three-fold approach to explainability.  Our framework for executing explainability establishes:

1) standards that clearly differentiate technical explainability from instances of ethical choice,

2) documentation and transparency criteria, and

3) “reasonableness” in the manner in which “explanations” are delivered to the public.

These audit criteria require documentation, proof of compliance, and sometimes outright disclosures to the JAIC or DoD.
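To make the three criteria above concrete, here is a minimal sketch of what an audit record covering them might look like. Every name in it (ExplainabilityRecord, its fields, and the gaps check) is a hypothetical assumption for illustration, not ForHumanity’s or the JAIC’s actual audit artifact.

    from dataclasses import dataclass, field

    @dataclass
    class ExplainabilityRecord:
        """Hypothetical audit record covering the three explainability criteria."""
        system: str
        # 1) Technical explanation kept distinct from instances of ethical choice.
        technical_explanation: str
        ethical_choices: list[str] = field(default_factory=list)
        # 2) Documentation and transparency criteria.
        documents: list[str] = field(default_factory=list)
        # 3) "Reasonableness": how the explanation reaches the affected person.
        delivery_channel: str = "written notice in plain language"

        def gaps(self) -> list[str]:
            """Flag missing material an auditor would need to see."""
            missing = []
            if not self.technical_explanation:
                missing.append("technical explanation")
            if not self.documents:
                missing.append("supporting documentation")
            return missing

    # Example: an incomplete record surfaces what still needs to be disclosed.
    rec = ExplainabilityRecord(
        system="decision-support model",
        technical_explanation="",
        ethical_choices=["human-in-the-loop override"],
    )
    print(rec.gaps())  # ['technical explanation', 'supporting documentation']

A record like this mirrors the point made earlier: the explanation need not trace every decision pathway, but it must give the affected individual, and the auditor, enough to understand how the decision was made.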