The first time I ever learned about black boxes was in a computer science class during the fall semester of my freshman year. At first, they were described to us in terms of the algorithms we were being taught. We didn’t yet know enough about how code worked to be taught what the code specifically did, so its effects were generalized, and we were told to trust it to turn our inputs into the outputs we wanted. Later, the term was expanded to mean any system so complex that its inner workings are nearly, or completely, impossible to understand. The outputs of such systems are often trusted, but their accuracy is difficult to actually verify.
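That classroom version of a black box reduces to calling a function whose internals you never see: you trust the mapping from input to output. As a minimal sketch (the function name and its hidden use of Python’s built-in `sorted` are my own illustration, not the actual class exercise):

```python
def opaque_sort(items):
    """A 'black box' sort: we trust input -> output without inspecting
    the internals. (Here it secretly wraps Python's built-in sorted();
    in class, the internals were genuinely hidden from us.)"""
    return sorted(items)

# We can only judge the box by its outputs, never its reasoning:
print(opaque_sort([3, 1, 2]))  # [1, 2, 3]
```

The point of the exercise is that the caller has no way to distinguish a correct implementation from a subtly wrong one except by checking outputs case by case.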
This inability to verify the results of black box systems, and the decision making behind them, is a real-world problem. My first experience examining the problems of obfuscating processes behind black boxes came in a computing ethics class, where I researched the use of algorithms to predict a felon’s risk of reoffending. These algorithmic verdicts were used by judges as an extra piece of evidence, but because of something called “algorithmic bias” – essentially, a person’s tendency to trust a computer even when there is no real evidence that the computer is correct – they carried more weight than they deserved. Introducing an algorithmic black box into judicial decision making unavoidably obfuscates the judicial process, and it violates the defendant’s right to due process: there is no longer a traceable path of reasoning between the evidence and the final decision.
I’d spent a lot of time learning about black boxes, but I’d never really considered what a black box would look like in a non-computing context – which I’ll admit is a bit silly, considering it is essentially the exact same thing. The reading Opening the “Black Box” of Climate Change Science was my first look at the idea of a black box applied in a wider context.
Black boxes aren’t necessarily sinister; sometimes processes are simply too complex to be understood. But when decisions rely on answers from black boxes, those answers arrive stripped of the context that produced them – the outcome is not necessarily understood and is divorced from the process behind it – and that is when black boxes become harmful. Just as they obfuscate judicial decisions, introducing them into any decision making process whose participants are not privy to their inner workings makes that process opaque at best, and harmful at worst.
Black boxes also have tangible effects on sustainability. The black box of the production process makes it difficult, or nearly impossible, for the everyday consumer to discern whether a product is actually sustainable. This obfuscation is often used to companies’ benefit, allowing them to make persuasive claims about why their product should be purchased without the consumer being able to verify those claims short of, at the very least, researching the product a great deal. This was made clear to all of us through our eco-friendly product reviews, where we were forced to tackle the question, “Are these products actually sustainable?”
These black boxes can affect not only consumer decisions, but also decisions with much more weight. Government decisions and regulations are one example, especially as they come to a vote: if those voting cannot understand the black box, then, for better or worse, the outcomes they are voting for and the impacts of those outcomes are undeniably obfuscated. This is not to say that the public should not vote on these matters, only that this is a concession that must be made whenever black boxes are involved. Because the truth is that if black boxes are obscuring the steps of a process, or are being used to purposely hide them, and we don’t know those steps are happening, we can’t do anything to change them.
Besel, Richard D. (2011). Opening the “Black Box” of Climate Change Science: Actor-Network Theory and Rhetorical Practice in Scientific Controversies. Southern Communication Journal, 76(2), 120–136. DOI: 10.1080/10417941003642403