Social sector organizations are under intense pressure to make decisions more effectively and efficiently. The demand for evidence-based decision-making and programming keeps rising, and with good reason. But as we automate more and more of those decisions, the importance of using open algorithms can't be overstated.

Why do open algorithms matter so much?

Algorithms used to make social sector decisions must be handled very differently than those employed in the corporate sector. If they’re not, the teams using them run the risk of hurting the very people they want to help.

How Decision-Making Algorithms Work

When Netflix suggests that you watch Grace and Frankie after you’ve finished Love, it uses an algorithm to decide what you’re likely to want to watch next. And when Google shows you one search result ahead of another, an algorithm made the decision that one page was more important. Oh, and when a photo app decides you’d look better with lighter skin, a seriously biased algorithm (developed by a real person) made that call.

So how do those algorithms do it? Because it’s not just random.

Algorithms are sets of rules that computers follow to solve problems or make decisions about a particular course of action. And those rules are becoming a bigger and bigger part of our lives: the information we see, the data others can see about us, the jobs we're hired for, whether our credit card applications are approved, even whether a driverless car notices us crossing the street. All of those decisions are made by algorithms.
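
To make that concrete, here is a deliberately simplified sketch of what a rule-based decision algorithm might look like. The rules and thresholds below are invented for illustration; they are not drawn from any real lender.

```python
# A toy illustration (not any real lender's logic): an algorithm is
# just a set of rules a computer follows to reach a decision.

def approve_credit_card(income: float, existing_debt: float,
                        missed_payments: int) -> bool:
    """Hypothetical approval rules for a credit card application."""
    if missed_payments > 2:           # Rule 1: payment history
        return False
    if existing_debt > income * 0.5:  # Rule 2: debt-to-income ratio
        return False
    return income >= 30_000           # Rule 3: minimum income

print(approve_credit_card(income=45_000, existing_debt=10_000,
                          missed_payments=0))  # True
```

Every approval or rejection this function produces traces back to three rules a person wrote down. Real systems are vastly more complicated, but the principle is the same.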

What’s the problem then?

The inherent problem with algorithms begins at the most basic level and persists throughout their use: human bias gets baked into machine-based decision-makers when they are built.

Is Your Computer Biased?

Data-based decision-making can be very helpful for choosing optimal medical treatments, optimizing spending, and determining the most effective programs to support vulnerable populations. These systems are efficient and effective. But we often make the mistake of assuming they are also unbiased.

If we’re not careful, these same algorithms can perpetuate racism, introduce bias, and make life harder for the very people who most need our support. And that’s why the social sector must use open algorithms. The rules used to make decisions need to be transparent.

In When Algorithms Run the Government, Andrew Means highlights the importance of using open algorithms in the social and public sectors. He points to a Wisconsin court that used a proprietary algorithm to estimate the likelihood that those charged with crimes would re-offend. At first blush, this seems reasonable, even good: using data to make this decision could reduce the chance of sentencing being based on a defendant's age, race, or ethnic background.

But those very biases can exist in the algorithm itself. When we see a judge consistently impose harsher sentences on Black defendants, we cry racism. When a computer does it, we shrug and say, “Well, the numbers don’t lie.”
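
To see how that can happen, consider a hypothetical risk score. The variables and weights below are invented for illustration; they are not the Wisconsin tool's actual formula. Notice that race never appears as an input, yet the score can still encode it through proxies such as neighborhood:

```python
# Hypothetical recidivism risk score -- invented for illustration only.
# Race never appears as an input, but "home_zip" can act as a proxy
# for race in segregated cities, quietly importing historical bias.

HIGH_ARREST_ZIPS = {"53206", "53209"}  # zips with heavy historical policing

def risk_score(prior_arrests: int, age: int, home_zip: str) -> float:
    score = 2.0 * prior_arrests        # arrest counts reflect policing
                                       # patterns, not just behavior
    score += 1.5 if age < 25 else 0.0
    score += 3.0 if home_zip in HIGH_ARREST_ZIPS else 0.0  # hidden proxy
    return score

# Two defendants with identical records, different neighborhoods:
print(risk_score(prior_arrests=1, age=30, home_zip="53206"))  # 5.0
print(risk_score(prior_arrests=1, age=30, home_zip="53217"))  # 2.0
```

With a closed algorithm, no one outside the vendor can see that the zip-code rule even exists. With an open one, this kind of proxy can be spotted, questioned, and challenged.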

“Should race be a factor in determining the likelihood a person will commit a crime? What about gender? Should social media, which not everyone uses, be the data source to identify cases of food poisoning?”

Why “Black Box” Algorithms Aren’t Cool

Not everyone thinks open algorithms are important. Google has gone to great lengths to keep its algorithms proprietary and to convince people in power that understanding the rules isn't really necessary.

“Many technologies in society operate as ‘black boxes’ to the user – microwaves, automobiles, and lights – and are largely trusted and relied upon without the need for the intricacies of these devices to be understood by the user.”

Yes, most of us use cars without understanding how they work. But our cars aren’t perpetuating stereotypes or biases that could significantly disadvantage those around us. We don’t use our microwaves to make life-altering decisions. When we flip a switch, the light turns on — regardless of our race, age, or gender.

Imagine your algorithm were a person, and you trusted that person entirely to make decisions about how your organization should use its resources, or about who should be able to access your programs. Wouldn't you want to know what criteria that person used to make their decisions? Of course you would.

Well, your algorithm is, in a way, just a person. It was built by someone, somewhere, and it carries that person's biases, experiences, and prejudices, which are not universal. So the only way to ensure your algorithm isn't harming the people you mean to help is to examine how it makes its decisions.
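
What does examining those decisions look like in practice? One low-tech approach is to make the algorithm explain itself: alongside each decision, report the contribution of every rule so a human can audit the criteria. A minimal sketch, with hypothetical eligibility rules:

```python
# A transparent, auditable decision: every rule's contribution is
# reported alongside the outcome. The rules themselves are hypothetical.

def program_eligibility(household_income: float, dependents: int):
    reasons = []
    points = 0
    if household_income < 25_000:
        points += 2
        reasons.append("income below $25,000: +2")
    if dependents >= 2:
        points += 1
        reasons.append("two or more dependents: +1")
    eligible = points >= 2
    return eligible, reasons  # the decision AND the criteria behind it

eligible, reasons = program_eligibility(household_income=22_000, dependents=1)
print(eligible)  # True
print(reasons)   # ['income below $25,000: +2']
```

Because the criteria travel with the decision, anyone (a caseworker, an auditor, the applicant themselves) can see exactly why the algorithm ruled the way it did.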

Want to Know More?

Worried your data is hiding bias? Need help developing open algorithms, or with data collection, analysis, or visualization? Call the team at Datassist now.
