Delivering Digital

Putting Humans at the Center of Artificial Intelligence

The new Stanford Institute for Human-Centered Artificial Intelligence hopes to guide the future of AI to positively impact people and society

As momentum accelerates to embed artificial intelligence (AI) in business, government, healthcare, and even art, a troubling trend keeps bubbling up. In the pursuit of eliminating the human from the machinery of AI, researchers and developers are also often eliminating the humanity from its applied use out in the world.

More often than not, the ethical and societal implications of AI are ignored in the pursuit of technological and financial advancement.

As a result, a body of technology initially conceived to benefit people is drifting into darker and darker territory. AI is improved for the sake of AI, with little regard for human impact. A new interdisciplinary group of researchers at Stanford University wants to re-center that focus on the humans the technology is meant to help.

Yesterday, they launched the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which aims to lead technologists, policymakers, business leaders, and academics toward a human-first approach to AI.

Equal parts think tank and research institution, HAI will run on three foundational principles, says John Etchemendy, co-director of HAI and a professor of philosophy at Stanford.

“First, is a bet that the future of AI technology is going to be inspired by our understanding of human intelligence. The second is that the technology has to be guided by our understanding of how it is impacting humans and society,” he says. “And, third, AI applications should be designed so that they enhance and augment what humans can do.”

Key to all of this is trying to rope in a better cross-section of AI stakeholders into not only the policy discussions, but the research itself, says Fei-Fei Li, co-director of HAI, professor of computer science at Stanford and the former chief scientist of AI at Google Cloud.

“Humanity is the key to building a positive future for AI,” she says. “In order to train AI to benefit humanity, the creators of AI need to represent humanity.”

The roots of HAI came from Li’s realization that even though AI is a technology with the potential to change history for all of us, it was largely being created by a narrow group of people, consisting mainly of “guys in hoodies.”

Doing AI right “requires a true diversity of thought across gender, age, ethnicity, and cultural background, as well as a diverse representation from different disciplines, from engineering, robotics, and statistics to philosophy, economics, anthropology, law and many more,” she says.

Though HAI just launched officially yesterday, the institute’s work is already well underway. Its faculty have started about 50 research projects funded by HAI on a broad range of topics, from research on bridging the gap between AI and neuroscience to studies of legal and regulatory implications in an AI-intensive world.

It has also started sponsoring symposia on topics like the future of work and AI’s impact on the humanities and the arts. In conjunction with its launch, it ran the 2019 Human-Centered Artificial Intelligence Symposium yesterday, featuring luminaries in AI and keynote remarks from Bill Gates. The entire program was livestreamed and can be found on YouTube for those interested in learning more.
