A discovery process for spotting unconscious biases and structural inequalities in AI systems

What are AI Blindspots?


AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities. Like any blindspot, AI blindspots are universal -- nobody is immune to them -- but harm can be mitigated if we intentionally take action to guard against them.

Organize an AI Blindspot workshop

PLANNING

In the initial stages of your project, it is important to think critically about: why you want to use a particular technology (Purpose); how accurately your data reflects affected communities (Representative Data); what vulnerabilities your system might expose (Abusability); and how to safeguard personal identifiable information (Privacy).

BUILDING

Vulnerable populations can be harmed by the performance metric you choose (Optimization Criteria) or by variables that act as proxies (Discrimination by Proxy). Depending on the sensitivity of the use case, you may need to understand and explain how the algorithm makes determinations (Explainability).

DEPLOYING

You should be vigilant about monitoring for changes that might affect the performance and impact of your system (Generalization Error), and ensure that individuals have mechanisms to challenge decisions (Right to Contest).

MONITORING

Organizations using AI systems should institute inclusive processes for stakeholder input (Consultation) and independent risk assessment (Oversight). The best way to catch blindspots is to genuinely engage with experts and affected communities as equals to define and track progress towards collective goals (Purpose).

What do we mean by AI?


Artificial intelligence has become a catch-all term for automated decision-making systems that derive patterns, insights, and predictions from large datasets. While these systems might aspire to emulate and automate intelligent, human-like judgment, most algorithms referred to as AI are in fact imperfect models susceptible to making erroneous inferences and rendering biased decisions.

Delegating high-stakes social and commercial decisions to AI exposes everyone to the risk of unequal treatment, because these seemingly impartial algorithms are produced by computer scientists, engineers, and companies whose data and practices may amplify historical biases in society.

Fairness requires thoughtful vigilance across all sectors, especially from researchers inventing, engineers building, organizations deploying, and advocates tracking AI systems. Above all, we need to safeguard and uplift people whose lives are affected by AI.

AI Blindspot Cards



PURPOSE

AI systems should make the world a better place. Defining a shared goal guides decisions across the lifecycle of an algorithmic decision-making system, promoting trust amongst individuals and the public.

REPRESENTATIVE DATA

For an algorithm to be effective, its training data must be representative of the communities that it may impact. The way you collect and organize data can benefit certain groups while excluding or harming others.
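
As a minimal sketch of this kind of audit (a Python illustration, not part of the card itself; the DataFrame, column name, and baseline shares are all hypothetical), you might compare each group's share of the training data against its share of the affected population:

    import pandas as pd

    def representation_gap(df, group_col, population_share):
        """Compare each group's share of the training data against its
        share of the affected population (a baseline you must supply,
        e.g., from census figures)."""
        observed = df[group_col].value_counts(normalize=True)
        rows = []
        for group, expected in population_share.items():
            actual = float(observed.get(group, 0.0))
            rows.append({"group": group,
                         "population_share": expected,
                         "data_share": actual,
                         "gap": actual - expected})
        return pd.DataFrame(rows).sort_values("gap")

    # Hypothetical usage: flag groups underrepresented by 5+ points.
    # baseline = {"A": 0.60, "B": 0.30, "C": 0.10}
    # report = representation_gap(train_df, "group", baseline)
    # print(report[report["gap"] < -0.05])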

ABUSABILITY

The designers of an AI system need to anticipate vulnerabilities and dual-use scenarios by modeling how bad actors might hijack and weaponize the system for malicious activity.

PRIVACY

AI systems often gather personal information that can invade our privacy. Systems storing confidential data are also vulnerable to cyberattacks, which can result in devastating breaches of personal information.

DISCRIMINATION BY PROXY

An algorithm can have an adverse effect on vulnerable populations even without explicitly including protected characteristics. This often occurs when a model includes features that are correlated with these characteristics.
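
One hedged first-pass heuristic (an illustration, not a method prescribed by the card) is to screen numeric features for correlation with a protected attribute before training. The sketch below assumes pandas and a numerically encoded protected column:

    import pandas as pd

    def proxy_screen(df, protected_col, threshold=0.3):
        """Flag numeric features whose absolute correlation with a
        protected attribute exceeds a threshold. Correlation is only a
        first-pass signal: proxies can also act through combinations
        of features that no single correlation will reveal."""
        numeric = df.select_dtypes("number").drop(columns=[protected_col],
                                                  errors="ignore")
        corr = numeric.corrwith(df[protected_col]).abs()
        return corr[corr > threshold].sort_values(ascending=False)

    # Hypothetical usage:
    # suspects = proxy_screen(train_df, protected_col="is_protected")
    # print(suspects)  # zip-code-derived features often surface here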

EXPLAINABILITY

The technical logic of many algorithms is complex, which can make their recommendations opaque. People involved in designing and deploying algorithmic systems have a responsibility to explain high-stakes decisions that affect individuals' well-being.
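
One common, model-agnostic starting point (one option among many, shown here on toy data rather than a real decision system) is permutation importance, which asks how much held-out performance degrades when each feature is shuffled:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy data standing in for a real decision system.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Shuffling an important feature should hurt validation accuracy;
    # the drop is a coarse, model-agnostic measure of its influence.
    result = permutation_importance(clf, X_val, y_val, n_repeats=10,
                                    random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature_{i}: {result.importances_mean[i]:.3f}")

Feature importances are a diagnostic aid, not a full account of any individual decision; high-stakes systems typically need case-level explanations as well.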

OPTIMIZATION CRITERIA

There are trade-offs and potential externalities when determining an AI system's metrics for success. It is important to balance performance metrics against the risk of negatively impacting vulnerable populations.
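
A simple way to surface such trade-offs (a sketch that assumes a binary classifier and labeled group membership; the arrays and names are hypothetical) is to disaggregate error rates by group rather than report one overall score:

    import numpy as np

    def rates_by_group(y_true, y_pred, groups):
        """Report false positive and false negative rates per group,
        so a single aggregate metric cannot hide an uneven error
        burden on a vulnerable population."""
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        for g in np.unique(groups):
            m = groups == g
            neg = y_true[m] == 0
            pos = y_true[m] == 1
            fpr = float(np.mean(y_pred[m][neg] == 1)) if neg.any() else 0.0
            fnr = float(np.mean(y_pred[m][pos] == 0)) if pos.any() else 0.0
            print(f"group={g}: FPR={fpr:.2f}, FNR={fnr:.2f}, n={int(m.sum())}")

    # Hypothetical usage with a trained classifier `clf`:
    # rates_by_group(y_test, clf.predict(X_test), group_test)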

GENERALIZATION ERROR

Between building and deploying an AI system, conditions in the world may change, or the deployment context may differ from the one the system was designed for, such that the training data are no longer representative.
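
A lightweight way to watch for this kind of drift (a sketch assuming SciPy and a single numeric feature; the function and threshold are illustrative, not prescribed by the card) is a two-sample Kolmogorov-Smirnov test comparing training data against live data:

    from scipy.stats import ks_2samp

    def drift_check(train_values, live_values, alpha=0.01):
        """Two-sample Kolmogorov-Smirnov test on one feature: a small
        p-value suggests live data no longer follow the distribution
        the model was trained on and the system deserves review."""
        stat, p_value = ks_2samp(train_values, live_values)
        drifted = p_value < alpha
        print(f"KS stat={stat:.3f}, p={p_value:.4f}, drifted={drifted}")
        return drifted

    # Hypothetical usage, run on a schedule for each key feature:
    # drift_check(train_df["income"].to_numpy(), live_df["income"].to_numpy())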

RIGHT TO CONTEST

Like any human process, AI systems carry biases that make them subjective and imperfect. The right to contest an algorithmic decision can surface inaccuracies and grant agency to people affected.

OVERSIGHT

Ethical principles, standards, and policies are futile unless monitored and enforced. A diverse oversight body vested with formal authority can help establish transparency and accountability, and impose sanctions when needed.

CONSULTATION

The first, last, and every step in between should include public participation. AI practitioners must enable meaningful input, explanations, and disclosures to ensure that AI systems promote human flourishing and mitigate harms.

BLANK TEMPLATE

Create a new card.

ABOUT

The AI Blindspot cards were developed by Ania Calderon, Dan Taber, Hong Qu, and Jeff Wen during the Berkman Klein Center Assembly program.

Learn more about the team.

COPYRIGHT

This work is licensed under a Creative Commons Attribution 4.0 International License.