Canadian City Uses Artificial Intelligence to Predict Homelessness

A tent sits under an on-ramp as traffic drives past during morning rush hour in Seattle. (Elaine Thompson/AP Photo)

The Canadian city of London, Ontario, is using artificial intelligence (AI) software to predict which residents are most vulnerable to becoming homeless so it can rush emergency financial assistance to them. The city government sees homelessness as a growing problem because of the coronavirus pandemic.

As Reuters explained on Thursday, London’s homeless are moving out of its overcrowded shelters and “sleeping rough” outdoors because they fear contracting the virus. The homeless tend to have weakened immune systems and live in poor sanitary conditions, making them a high-risk population. Sleeping outdoors grows increasingly dangerous as colder weather approaches.

According to London’s information technology staff, the Chronic Homelessness Artificial Intelligence (CHAI) system analyzes various personal data, including participants’ use of city services and how often they sleep in shelters, to determine who is in the greatest danger of becoming “chronically homeless.”

“According to the model’s predictions, a single male who has stayed in shelters, is older than 52 and has no local family is often at high risk of becoming chronically homeless, especially if he is a veteran or an indigenous person,” the technicians explained. The system was said to be 93 percent accurate during its test run in the spring and summer.
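For illustration only, the profile Reuters describes could be captured by a simple additive scoring rule. The sketch below is a hypothetical Python rendering of that idea; the field names, weights, and example values are assumptions, not CHAI’s actual model.

```python
# Hypothetical sketch of the high-risk profile Reuters describes.
# Field names, weights, and the example client are illustrative
# assumptions, not CHAI's actual model.
from dataclasses import dataclass

@dataclass
class ShelterClient:
    age: int
    shelter_stays_past_year: int
    has_local_family: bool
    is_veteran: bool
    is_indigenous: bool

def chronic_risk_score(c: ShelterClient) -> float:
    """Toy additive score in [0, 1] mirroring the reported risk factors."""
    score = 0.0
    score += 0.3 if c.age > 52 else 0.0                      # older single males
    score += min(c.shelter_stays_past_year, 30) / 30 * 0.3   # frequent shelter use
    score += 0.2 if not c.has_local_family else 0.0          # no local support network
    score += 0.1 if c.is_veteran else 0.0
    score += 0.1 if c.is_indigenous else 0.0
    return score

client = ShelterClient(age=58, shelter_stays_past_year=22,
                       has_local_family=False, is_veteran=True,
                       is_indigenous=False)
print(f"risk score: {chronic_risk_score(client):.2f}")  # -> risk score: 0.82
```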

Naturally, there were privacy concerns surrounding the system, but its defenders noted that participation in the program is voluntary and that randomized ID codes are employed to protect individual privacy. The idea seems to be a separation of duties: operators ensure the data fed into the system are accurate and processed correctly without knowing the names of the subjects involved, while those who administer the aid are told who is at greatest risk of becoming homeless without seeing the detailed and sensitive personal information used to generate the warning.
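That separation of duties can be sketched in a few lines. The following is a minimal, hypothetical illustration of the randomized-ID pattern, not CHAI’s implementation: the analytics side never sees names, and only the aid side can turn a flagged code back into a person.

```python
# Minimal illustration of pseudonymization via randomized ID codes.
# Class and field names are invented for this sketch.
import secrets

class IdVault:
    """Holds the only mapping between real identities and random codes."""
    def __init__(self):
        self._code_to_name = {}
        self._name_to_code = {}

    def pseudonymize(self, name: str) -> str:
        """Issue (or reuse) a random code for a participant."""
        if name not in self._name_to_code:
            code = secrets.token_hex(8)  # random label, reveals nothing about the name
            self._name_to_code[name] = code
            self._code_to_name[code] = name
        return self._name_to_code[name]

    def reidentify(self, code: str) -> str:
        """Only aid administrators would call this, and only for flagged codes."""
        return self._code_to_name[code]

vault = IdVault()
record = {"id": vault.pseudonymize("Jane Doe"), "shelter_stays": 22, "age": 58}
# Operators validate and process (id, features) records without seeing names,
# while the service side maps a high-risk id back to someone they can help.
print(vault.reidentify(record["id"]))  # -> Jane Doe
```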

Reuters still found some digital privacy advocates who worried about the London AI system and its implications for social policy:

“It is paramount to think about not just what our data is used for, but (also) ‘what can our data be used for in the future?’ — and assume whoever holds the data has no scruples,” said Paulo Garcia, assistant professor of computer engineering at Ottawa’s Carleton University.

If a new government came into power looking to cut costs, for example, this information could potentially be used to determine who is taking up large amounts of resources and where funding could be slashed, [University of Ottawa law professor Teresa] Scassa said.

Scassa was alluding to the fact that “chronically homeless” individuals are thought to consume about 12 times as much in government resources as those who are only occasionally homeless. Many government reformers would not consider it sinister or unscrupulous to have a computer system flag heavy users of social welfare programs and try to figure out why they regularly require such extensive assistance.

Canada’s CBC News reported in August that while CHAI’s warnings of chronic homelessness seem fairly accurate, London officials were still “figuring out” how to make use of them. The officials were hopeful that the AI system could help them work backward and determine how people become vulnerable to chronic homelessness.

“If you read about AI in popular culture, the big issue right now is unintended bias or black box models, models that give you a prediction but you don’t know why,” said London’s IT manager, Matt Ross. “We built this from the ground up, ensuring that the model actually can explain exactly why it made the prediction it did, and that’s to do two things: build trust in the model and allow it to be implemented safely and ethically, but also reduce or eliminate unintended bias.”
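For a sense of what that kind of built-in explanation might look like, here is a hedged sketch using a simple linear scoring model, where every prediction decomposes into per-feature contributions. The feature names and weights are assumptions for illustration, not CHAI’s.

```python
# Sketch of an explainable prediction: with a linear model, the score is a
# sum of per-feature contributions, so the "why" falls out of the arithmetic.
# Weights and feature names are illustrative assumptions.
WEIGHTS = {"age_over_52": 1.1, "shelter_stays": 0.08,
           "no_local_family": 0.9, "veteran": 0.6}

def explain_prediction(features):
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = explain_prediction(
    {"age_over_52": 1, "shelter_stays": 22, "no_local_family": 1, "veteran": 1})
print(f"score = {score:.2f}")
for name, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{contrib:.2f}")  # largest drivers of the prediction first
```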

Ross said a key feature of CHAI is that the system “learns from the data itself” and works out the “patterns that are predictive of homelessness,” rather than being programmed with the assumptions of human operators. The hope, in essence, is that the AI will turn its cold analytical eye on a problem that is difficult for humans to study without preconceptions and emotional bias.
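A hedged sketch of what “learns from the data itself” means in practice: a classifier is fit to labeled historical records and extracts its own weights, rather than having an operator encode the rules. The records below are synthetic, invented purely for illustration, and scikit-learn is an assumed stand-in for whatever tooling London actually uses.

```python
# Learning patterns from data instead of hand-coding them.
# Training records are synthetic, invented for this sketch only;
# scikit-learn is an assumed stand-in, not London's actual stack.
from sklearn.linear_model import LogisticRegression

# Columns: [age, shelter stays in past year, has local family (0/1)]
X = [[58, 22, 0], [30, 2, 1], [61, 15, 0], [25, 1, 1],
     [55, 30, 0], [40, 3, 1], [63, 18, 0], [33, 0, 1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = later became chronically homeless

model = LogisticRegression().fit(X, y)

# The learned coefficients are the discovered pattern; no human wrote
# "older + frequent shelter use + no local family" into the code.
print(dict(zip(["age", "stays", "family"], model.coef_[0].round(3))))
print(f"risk for a new client: {model.predict_proba([[57, 20, 0]])[0][1]:.2f}")
```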

Ross noted in another interview that prototype AI systems for predicting homelessness are being tested in other cities, including Montreal and the U.S. cities of Austin, New York, and Los Angeles, but London’s is believed to be the first that has been effectively “integrated into service delivery.” CHAI was designed to process data from Canada’s federal Homeless Individuals and Families Information System (HIFIS), so its creators believe it could be implemented in other Canadian cities with relative ease.
