21 - Making Algorithms Work for Reporting
Abstract
Sophisticated data analysis algorithms can greatly benefit investigative reporting, but most of the work is getting and cleaning data.
Keywords: algorithms, machine learning, computational journalism, data journalism, investigative journalism, data cleaning
The dirty secret of computational journalism is that the “algorithmic” part of a story is not the part that takes all of the time and effort.
Don't misunderstand me: Sophisticated algorithms can be extraordinarily useful in reporting, especially investigative reporting. Machine learning (training computers to find patterns) has been used to find key documents in huge volumes of data. Natural language processing (training computers to understand language) can extract the names of people and companies from documents, giving reporters a shortcut to understanding who's involved in a story. And journalists have used a variety of statistical analyses to detect wrongdoing or bias.
But actually running an algorithm is the easy part. Getting the data, cleaning it and following up algorithmic leads is the hard part.
To illustrate this, let's take a success for machine learning in investigative journalism, The Atlanta Journal-Constitution's remarkable story on sex abuse by doctors, “License to Betray” (Teegardin et al., 2016). Reporters analyzed over 100,000 doctor disciplinary records from every US state, and found 2,400 cases where doctors who had sexually abused patients were allowed to continue to practice. Rather than reading every report, they first drastically reduced this pile by applying machine learning to find reports that were likely to concern sexual abuse. This cut the pile down by more than a factor of 10, to just 6,000 documents, which they then read and reviewed manually.
This could not have been a national story without machine learning, according to reporter Jeff Ernsthausen. “Maybe there's a chance we would have made it a regional story,” he said later (Diakopoulos, 2019).
This is as good a win for algorithms in journalism as we’ve yet seen, and this technique could be used far more widely. But the machine learning itself is not the hard part. The method that Ernsthausen used, “logistic regression,” is a standard statistical approach to classifying documents based on which words they contain. It can be implemented in scarcely a dozen lines of Python, and there are many good tutorials online.
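To give a sense of how little code is involved, here is a minimal sketch of logistic regression for document classification using scikit-learn. The toy documents and labels are invented for illustration; they are not the AJC's actual data or code.

```python
# A bag-of-words logistic regression classifier: label a few documents
# by hand, then score the rest. (Toy data, purely illustrative.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny hand-labeled training set: 1 = likely concerns sexual abuse, 0 = not.
docs = [
    "physician engaged in sexual misconduct with a patient",
    "inappropriate sexual contact during an examination",
    "license suspended for improper billing practices",
    "failure to maintain adequate medical records",
]
labels = [1, 1, 0, 0]

# Represent each document as a vector of word counts ("bag of words").
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Fit the classifier on the labeled examples.
model = LogisticRegression()
model.fit(X, labels)

# Score a new, unlabeled report. In a real project, high-scoring
# documents would be routed to reporters for manual review.
new = vectorizer.transform(["sexual misconduct during patient exam"])
print(model.predict(new)[0])  # classifier flags this as likely relevant
```

In practice the work lies elsewhere: assembling and labeling a training set, tuning the threshold between missed cases and wasted reading time, and then manually verifying everything the model flags.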
Chapter in The Data Journalism Handbook: Towards a Critical Data Practice, pp. 143–146. Amsterdam University Press, 2021.