How ‘Big Data’ Might Help Predict Which Children Are Most At Risk For Abuse

Each year, 3 million children are investigated for maltreatment, and 700,000 of those cases are substantiated

Each day, social workers must decide whether or not the children they visit should be removed from their parents’ homes. It’s a decision that changes the courses of those kids’ lives.

During a recent episode of KERA’s “Think,” Naomi Schaefer Riley, a visiting fellow at the American Enterprise Institute, talked about how we can better harness statistical information to help make these decisions.

The idea of using “big data” to analyze risk isn’t a brand new concept, but better algorithms are allowing it to be used in new ways.

On the “overwhelming” number of kids in the system

Riley says about one in three children younger than 18 has had some contact with the child welfare system.

“I think we can safely say that a lot of those cases are obviously not substantiated,” she says. “It means that a lot of kids are having contact with child welfare when there’s absolutely no need for it.”

Workers in the American child welfare system are completely overwhelmed, she says.

“There’s basically a fire hose of reports being thrown at them and the question is: How can we really expect them reasonably to sort through them?” she says.

On using predictive analytics to determine abuse risk

Data on families is available from various institutions, including schools, welfare agencies and the health care system. In the past few years, those systems have started talking to one another.

“Suddenly, people who were looking at child welfare could actually access data from schools and could actually access data from medical records or data from welfare benefits,” she says.

Riley says this data can then be used in an algorithm to generate a “score” for a family, measuring the likelihood that a child would be subject to abuse.

“Once a score is spit out, it’s not ‘Oh, they scored a 10; let’s go remove the child from the home.’ Rather it’s actually a way of determining for a case worker how urgent this case is to be looked at,” she says.
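The triage idea Riley describes can be sketched in a few lines of code. Everything in this toy model — the feature names, the weights and the 1-to-10 scale — is invented for illustration and does not come from any real screening tool; the point is only that the score orders the caseload rather than deciding outcomes.

```python
# Toy illustration of risk-scoring triage. All features, weights and
# thresholds here are hypothetical, not from any actual screening tool.

def risk_score(family):
    """Combine cross-system indicators into a 1-10 urgency score."""
    weights = {
        "prior_welfare_referrals": 2.0,  # hypothetical weight
        "school_absence_rate": 1.5,      # hypothetical weight
        "er_visits_last_year": 1.0,      # hypothetical weight
    }
    raw = sum(weights[k] * family.get(k, 0) for k in weights)
    return max(1, min(10, round(raw)))  # clamp to the 1-10 scale

def triage(families):
    """Sort open cases so workers see the highest scores first."""
    return sorted(families, key=risk_score, reverse=True)
```

Note what the score does and does not do: `triage` only reorders the queue so the most urgent-looking cases surface first; nothing in it removes a child from a home, which matches Riley's description of how the tools are meant to be used.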

On the concern this data would lead to over-reporting in minority communities

Poverty correlates with child abuse and neglect, and minority neighborhoods tend to be poorer. The risk of abuse increases with economic instability in the home, Riley says. Parents and children tend to have less stable relationships when they’re stressed over money.

Investigations of abuse and abuse itself are more prevalent in these conditions, she says.

Riley says data might not be the best or even the most objective way to assess a household’s risk, but individual caseworkers can bring their own racial bias to the situation, too.

“When you compare [data] with just sending a particular person with their particular biases into a home to look around and see what they think, I think that raises all sorts of problems as well,” she says. “Problems that might be worse because there’s no one looking at the big picture here.”

Riley’s story “Can Big Data Help Save Abused Kids?” appears in Reason magazine.