AI, Work & Government

How should we govern, and be governed by, AI?

Corporations and government bureaucracies are often likened to machines, with individual workers and bureaucrats merely the cogs. In the future, however, both corporations and governments may become machine-operated in a more literal sense. AI algorithms, powered by ever-growing data, create opportunities to automate many decisions, from hiring and firing employees to allocating government welfare benefits.

Algorithmic decision-making in government and business offers immense opportunities for faster, more economically efficient outcomes. It also has the potential to produce socially desirable outcomes by eliminating human error and bias. At the same time, such systems risk perpetuating entrenched problems. For example, consider an AI system trained to evaluate resumes in order to short-list candidates more rapidly. If this system is trained on biased data, it will simply reproduce those biases, rather than select the best candidates for the job.
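The bias-reproduction mechanism described above can be illustrated with a minimal sketch. The data, the `(score, group)` profile representation, and the look-up "model" below are all hypothetical, invented purely for illustration; a real screening system would be far more complex, but would inherit past decisions in essentially the same way.

```python
# Hypothetical historical hiring records: (qualification_score, group, hired).
# Equally qualified candidates from group "B" were rejected in the past.
history = [
    (8, "A", True), (8, "B", False),
    (6, "A", True), (6, "B", False),
    (4, "A", False), (4, "B", False),
]

def train(records):
    """'Learn' the past hiring rate for each (score, group) profile."""
    buckets = {}
    for score, group, hired in records:
        buckets.setdefault((score, group), []).append(hired)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

model = train(history)

# Two new candidates with identical qualifications but different groups:
print(model[(8, "A")])  # 1.0 -> shortlisted
print(model[(8, "B")])  # 0.0 -> rejected, purely because of group membership
```

The model is accurate with respect to its training data, yet it encodes the historical discrimination rather than the candidates' merit.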

Another question that such systems raise is how societies agree on the objective function that the algorithm optimizes. Economic and social objectives often conflict with one another, and people prioritize them differently depending on their beliefs and political ideology. This necessitates a mechanism for identifying such conflicts in values, and for subsequently resolving them to produce acceptable outcomes.
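The dependence of outcomes on the chosen objective function can be made concrete with a small sketch. The policies, objective names, and scores below are hypothetical; the point is only that when objectives conflict, the weights, which embody a value judgment, fully determine which option the algorithm selects.

```python
# Two hypothetical policies scored on two conflicting objectives.
policies = {
    "policy_1": {"economic_efficiency": 0.9, "equity": 0.3},
    "policy_2": {"economic_efficiency": 0.4, "equity": 0.8},
}

def best_policy(weights):
    """Pick the policy maximizing a weighted sum of objective scores."""
    def score(policy):
        return sum(weights[obj] * value for obj, value in policy.items())
    return max(policies, key=lambda name: score(policies[name]))

# Different societal weightings yield different "optimal" policies:
print(best_policy({"economic_efficiency": 0.8, "equity": 0.2}))  # policy_1
print(best_policy({"economic_efficiency": 0.2, "equity": 0.8}))  # policy_2
```

Nothing in the optimization itself can adjudicate between the two weightings; that choice must come from outside the algorithm.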

This research area explores the promises and challenges that arise when AI systems are used to govern or manage people, whether in a political realm or a commercial context.


Sample Projects

Previous studies of people's attitudes toward algorithmic management yield inconsistent findings; field experiments on crowdsourced marketplaces can overcome their methodological limitations.
We need conceptual frameworks for designing new governance architectures for human-machine social systems.
We show that the way we talk about AI-generated art affects how credit and responsibility are allocated to human stakeholders.
We report on a randomized experiment designed to study the effect of exposure to media manipulations on over 15,000 individuals' ability to discern machine-manipulated media.
