Privacy

Planning     Building     Deploying
AI systems often gather personal information, which can invade people's privacy. Systems that store confidential data are also vulnerable to cyberattacks and data breaches that expose personal information.

Have you considered...?

  • Using privacy-enhancing technologies such as federated learning, differential privacy, de-identification, and secure data enclaves based on the level of risk
  • Conducting privacy and security risk assessments, and incorporating privacy by design measures in ethical review processes
  • Requiring affirmative, prospective consent from individuals before including data about them
  • Allowing individuals to object or withdraw their consent
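To make one item on the list above concrete, here is a minimal sketch of differential privacy: releasing a count with Laplace noise so that no single individual's presence in the data can be confidently inferred. The function name, dataset, and epsilon values are illustrative, not part of the AI Blindspot materials.

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one person's record changes the true count by at
    most 1, so noise drawn from Laplace(0, 1/epsilon) gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical example: count applicants under 30 without revealing
# whether any one person appears in the dataset.
ages = [22, 35, 28, 41, 19, 30]
noisy = dp_count(ages, lambda a: a < 30, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the right setting depends on the level of risk, as the card suggests.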

Case study

To receive welfare benefits, individuals must provide data that is shared across multiple government and commercial databases. This type of personal data can be misused to deny low-income people access to housing or jobs, or target them with predatory loans.

Have you engaged with...?

  • Privacy advocates
  • Legal counsel
  • Cybersecurity experts


AI Blindspot Cards

PURPOSE

AI systems should make the world a better place. Defining a shared goal guides decisions across the lifecycle of an algorithmic decision-making system, promoting trust amongst individuals and the public.

REPRESENTATIVE DATA

For an algorithm to be effective, its training data must be representative of the communities that it may impact. The way that you collect and organize data will benefit certain groups while excluding or harming others.

ABUSABILITY

The designers of an AI system need to anticipate vulnerabilities and dual-use scenarios by modeling how bad actors might hijack and weaponize the system for malicious activity.

PRIVACY

AI systems often gather personal information, which can invade people's privacy. Systems that store confidential data are also vulnerable to cyberattacks and data breaches that expose personal information.

DISCRIMINATION BY PROXY

An algorithm can have an adverse effect on vulnerable populations even without explicitly including protected characteristics. This often occurs when a model includes features that are correlated with these characteristics.
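The proxy effect described above can be checked with a simple audit: even if a protected characteristic is excluded from the model, a remaining feature may predict group membership almost perfectly. This is an illustrative sketch with made-up data and a crude rate-gap measure, not a formal fairness test.

```python
def proxy_audit(feature, protected):
    """Gap in the rate of feature=1 between protected groups.

    For a binary feature and binary protected attribute, a gap near
    1.0 means the feature nearly determines group membership and is
    likely acting as a proxy for it.
    """
    n_a = sum(protected)
    n_b = len(protected) - n_a
    rate_a = sum(f for f, p in zip(feature, protected) if p == 1) / max(1, n_a)
    rate_b = sum(f for f, p in zip(feature, protected) if p == 0) / max(1, n_b)
    return abs(rate_a - rate_b)

# Hypothetical data: a zip-code flag that appears only in group 1,
# so the gap is 1.0 -- a strong proxy signal even though the
# protected attribute itself was never given to the model.
zip_flag = [1, 1, 1, 0, 0, 0]
group    = [1, 1, 1, 0, 0, 0]
gap = proxy_audit(zip_flag, group)  # -> 1.0
```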

EXPLAINABILITY

The technical logic of algorithms is complex, which makes their recommendations hard to interpret. People involved in designing and deploying algorithmic systems have a responsibility to explain high-stakes decisions that affect individuals' well-being.

OPTIMIZATION CRITERIA

There are trade-offs and potential externalities when determining an AI system's metrics for success. It is important to balance performance metrics against the risk of negatively impacting vulnerable populations.

GENERALIZATION ERROR

Between building and deploying an AI system, conditions in the world may change, or may no longer reflect the context in which the system was designed, so that the training data are no longer representative.
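One simple way to notice the drift described above is to compare feature statistics between the training data and live traffic. This is a minimal illustrative check with hypothetical numbers, not a formal drift test.

```python
def mean_shift(train, live):
    """Relative shift in a feature's mean between training and live data.

    A large value suggests the training data may no longer be
    representative of the deployment context.
    """
    mu_train = sum(train) / len(train)
    mu_live = sum(live) / len(live)
    return abs(mu_live - mu_train) / (abs(mu_train) or 1.0)

# Hypothetical example: applicant incomes rose sharply after launch,
# so the model is now scoring a population it was not trained on.
train_income = [30_000, 42_000, 55_000]
live_income  = [58_000, 90_000, 110_000]
shift = mean_shift(train_income, live_income)
```

A shift well above zero would prompt retraining on fresh, representative data.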

RIGHT TO CONTEST

Like any human process, AI systems carry biases that make them subjective and imperfect. The right to contest an algorithmic decision can surface inaccuracies and grant agency to people affected.

ABOUT

The AI Blindspot cards were developed by Ania Calderon, Dan Taber, Hong Qu, and Jeff Wen during the Berkman Klein Center and MIT Media Lab’s 2019 Assembly program.

Learn more about the team.

COPYRIGHT

This work is licensed under a Creative Commons Attribution 4.0 International License.