
Steinman '19: Code can’t cure all

Pre-roll advertisements on YouTube, the ones that play before your desired video, are widely regarded as one of the most annoying forms of online advertising. Sometimes that is because they are disproportionately long relative to the video you’re trying to watch, or because the advertised product is grossly incongruent with the video’s content, as when an ad runs before news coverage of a terrorist attack or natural disaster. Disapproval of pre-roll ads is near-universal: An estimated 94 percent of viewers click the “skip ad” button when they are given the chance. As such, it feels strange to defend them. But as inconvenient as they are, advertising money is what keeps independent YouTube broadcasters afloat. Unlike television advertising, ad placement on YouTube is not determined by the advertisers themselves or by a network, but by an algorithm delivering content to more than a billion users every single day. And on a website whose founding slogan in 2005 was “Broadcast Yourself,” the makeup of that algorithm has dangerous implications for free speech.


The sheer size of YouTube’s operation underscores the importance of what this algorithm does. Every minute, another 400 hours of content are uploaded; it would take a 24/7 staff of 24,000 to watch and monitor every new addition, a job I would wish upon no one. With “thousands of sites” signing on as advertisers each day and 1.7 billion so-called “bad ads” removed in 2016 alone, the lines of code that govern ad placement on YouTube are responsible for an almost unthinkable number of decisions that, combined, curate the content that eventually reaches viewers.
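For a sense of scale, here is a minimal back-of-the-envelope sketch showing where that 24,000-person figure comes from. It assumes reviewers watch in real time, around the clock, with no breaks; the 400-hours-per-minute figure is the one cited above, and the rest is simple arithmetic:

```python
# Back-of-the-envelope check of the staffing figure cited above.
# Assumption: reviewers watch uploads in real time, 24 hours a day.

hours_uploaded_per_minute = 400
minutes_per_hour = 60

# Hours of new content arriving every hour of the day.
hours_arriving_per_hour = hours_uploaded_per_minute * minutes_per_hour  # 24,000

# One round-the-clock reviewer can watch exactly 1 hour of video per hour,
# so the number of simultaneous reviewers needed equals that figure.
reviewers_needed = hours_arriving_per_hour

print(f"Reviewers needed to watch everything in real time: {reviewers_needed:,}")
# -> Reviewers needed to watch everything in real time: 24,000
```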


Last month, YouTube, which is owned by Google, changed this algorithm, pulling ad dollars from all “potentially objectionable content.” A wide range of YouTube stars, from a progressive radio host to a firearm review channel, watched their funding disappear overnight. The New York Times described the move as “abrupt” and “vague,” while Google billed it as “expanded safeguards for advertisers.” To be sure, advertisers deserve the absolute right to choose what type of content they attach to their brand and reputation, as seen in the mass exodus of advertisers from sites like Breitbart. The rationale behind that boycott, and many others, is a guiding principle of web economics: For website owners, advertisers and sponsored content are power. Without advertiser money, it is hard for anyone online to gain traction as a writer, musician, comedian or artist, which is why the flow of YouTube’s advertising dollars is so integral to its function.


In late March, the Wall Street Journal reported that major brands like Pepsi, Walmart, Starbucks, General Motors and FX Networks were suspending advertising from YouTube or from the entire Google universe in response to their ads appearing on controversial videos. A company with a conservative CEO might well have valid objections to its advertisements appearing on a left-wing political talk show, and a company whose leadership believes in gun control might feel the same way about the firearm channel. Similarly, it is good to see companies like Google taking some measure of responsibility for the extremism that can flourish on their platforms. But these are personal, human decisions, not ones to leave up to an algorithm. Whatever your feelings about gun control, the testing and reviewing of weapons for informational purposes is not hate speech, and no reasonable human judge could mistake it for hate speech. Misclassification on that scale can only be the fault of erroneous computer programs, which, despite their increased sophistication, cannot take offense at any kind of speech and so must rely on overly broad and occasionally bizarre metrics to weed out hate. Once code is made political, the ramifications are enormous, both in attracting an increasingly partisan and divided user base and in redefining the boundary between fact and opinion.


This reliance is dangerous and dystopian, portending an increasingly plausible future in which an algorithm, guided by nothing more than market incentives, decides what is good or bad for us to view. As the disturbing saga of the so-called Facebook Live murder that played out this week demonstrated, we are in completely uncharted territory when it comes to the ethics of online shareable content. But handing control of our standards of decency over to artificial intelligence will only plunge us further into the dark.


Clare Steinman ’19 can be reached at clare_steinman@brown.edu. Please send responses to this opinion to letters@browndailyherald.com and other op-eds to opinions@browndailyherald.com.




