NYC Bias Audit

Our team of experts in the fields of HR, employment, assessment, and AI Ethics has come together to draft a bias audit in support of the NYC Council’s call for bias audits covering automated employment decision tools.


These audit criteria are proposed for consideration by any jurisdiction where a bias audit of automated employment decision tools might be a useful instrument for the responsible regulatory body or enforcement agent, in the interest of governance, oversight and accountability over the areas of ethics, bias, privacy, trust and cybersecurity.


This certification scheme is designed for public or private entities of any size. It may be applied to one or more algorithmic systems, artificial intelligence systems, or autonomous systems unless exempted by the Relevant Legal Framework.


AIs and autonomous systems that do NOT contain Personal Data, Personally Identifiable Information (PII), Special Category Data or Proxy Variables are out of scope for this certification scheme. Auditors may provide the scheme, as agreed by contract, for one or more algorithmic systems, artificial intelligence systems or autonomous systems. Any system specifically listed in the Act is in scope for this certification scheme. Please see Independent Audit of AI Systems USER GUIDE v1.1 for more details on auditors, auditees and the creation and management of the audit rules and certification schemes.


The Auditee determines the system(s) to which the Auditor will apply the scheme and memorialises this agreement via contract. The Target of Evaluation (ToE) shall be defined by a contract between the Auditor and the organisation (Auditee). The Auditee shall contract for any or all data controller/joint-controller/processor ToE that are in scope, including, but not limited to, algorithmic systems, artificial intelligence systems or autonomous systems that include Personal Data.


The Auditor will perform an audit only to the breadth and scope of the ToE determined by the contract with the Auditee. The Auditee bears the responsibility of ensuring that all necessary systems undergo an audit. The contract must specify one of the following:

  1. For an individual system certification — the Auditee shall identify the ToE by name/identifier, specifically noting the boundaries of the system. Associated systems or adjacent processing must be clearly delineated as “in scope” or “out of scope” by the contract and have defined beginnings and endings. In-scope functions can include Data Processors acting on behalf of a Data Controller.
  2. For multiple named systems — the same criteria as #1 apply to each system, or to the sum of the combined systems, where the Auditee must delineate the beginnings and endings.

Automated employment decision tools encompass any algorithmic system, AI or autonomous system designed to search, screen, filter, select or otherwise determine a candidate for further consideration (e.g. interviewing, human review) in the hiring process, or to directly employ as a function of the system.

There are certain audit criteria that apply only to Data Controllers. Consistent with the GDPR and UK GDPR, ForHumanity defines a Data Controller as the organisation that interfaces with a Data Subject; collects Personal Data, Special Category Data, PII or Proxy Variables; and distributes a privacy policy to that Data Subject. As a result, an organisation that functions solely as a Data Processor will be unable to comply with audit criteria specified for Data Controllers. This will not prevent it from achieving certification. Data Processors need not comply with criteria that apply only to Data Controllers and are marked “Controller-only”. A Data Controller or Joint-Controller must be compliant with all criteria for certification.

Audit Process and Governance

Please read the separate document, Independent Audit of AI Systems USER GUIDE v1.0. That document contains all elements of governance for the audit process, including an appeals process.


The document transparently details the step-by-step maintenance process for the audits: licensing, oversight, changes, voting, the role of the ForHumanity Fellows and all details required to know, in advance, how the audits will be operated, assuring auditors and auditees that the system will be reliable and predictable.

Risk-based/SME (Small and Medium Enterprises)

This certification scheme recognises that Small and Medium Enterprises (SMEs) may be overly burdened by the size and scope of the audit rules. Explanatory note 26 specifically comments on the risks to be considered, evaluated and mitigated in an Algorithmic Risk Analysis. This analysis in the certification scheme identifies impacted stakeholders and assesses the severity and likelihood of algorithmic impact.
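As an illustration only (the scheme itself prescribes no particular scoring method), an Algorithmic Risk Analysis of this kind can be sketched as a per-stakeholder severity/likelihood assessment; the stakeholder groups and ordinal scales below are hypothetical:

```python
# Hypothetical sketch of an Algorithmic Risk Analysis record: each impacted
# stakeholder group is scored on ordinal severity and likelihood scales
# (1 = low, 5 = high), and risks are ranked by their product. The stakeholder
# names and scales are illustrative, not prescribed by the scheme.

def risk_score(severity: int, likelihood: int) -> int:
    """Combine ordinal severity and likelihood into a single rank."""
    return severity * likelihood

# Example assessment for a hypothetical automated hiring-screen tool.
assessments = {
    "job applicants": {"severity": 5, "likelihood": 4},
    "current employees": {"severity": 3, "likelihood": 2},
    "hiring managers": {"severity": 2, "likelihood": 3},
}

# Rank stakeholder groups from highest to lowest algorithmic risk.
ranked = sorted(
    assessments.items(),
    key=lambda item: risk_score(item[1]["severity"], item[1]["likelihood"]),
    reverse=True,
)

for stakeholder, scores in ranked:
    print(stakeholder, risk_score(scores["severity"], scores["likelihood"]))
```

In practice the analysis would also document the mitigation chosen for each identified risk; the sketch only shows the identification and ranking step.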


SPECIAL NOTE for SMEs with fewer than 25 employees: all requirements referencing a “committee or committees” may be fulfilled by a single person. However, the SME is strongly encouraged to meet the demands of the committee requirement via external, voluntary oversight boards. This can enable an SME to seek diverse and objective inputs on algorithmic development.