Facebook, YouTube, Twitter and Co. can no longer hide their problems in a black box

Stories about the inscrutability of AI have been exaggerated. Big Tech should prepare for regulators to look deep inside its platforms in the near future.

There’s a good reason to unravel the mysteries of the social media giants. For the past decade, governments have watched helplessly as their democratic processes have been disrupted by misinformation and hate speech on sites such as Meta Platforms Inc.’s Facebook, Alphabet Inc.’s YouTube, and Twitter Inc. Now some governments are preparing to push back.

Over the next two years, Europe and the UK are preparing legislation to curb the problematic content that social media companies have let go viral. There is plenty of skepticism about regulators’ ability to look under the hood of companies like Facebook. After all, regulators lack the technical know-how, manpower, and salaries that Big Tech can boast. And there’s another technical catch: the artificial intelligence systems the tech companies use are notoriously difficult to decipher.

But the naysayers should keep an open mind. New techniques are being developed that will make these systems easier to scrutinize. The so-called black box problem of AI is not as impenetrable as many think.

AI powers most of what we see on Facebook or YouTube, and in particular the recommendation systems that decide which posts appear in your newsfeed or which video to watch next – all to keep you scrolling. Millions of pieces of data are used to train the software, allowing it to make predictions roughly similar to those a human would make. The hard part for engineers is understanding how the AI arrives at a decision in the first place. Hence the black box concept.
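
To make that concrete, here is a minimal sketch in Python of a toy “watch next” ranker trained on invented engagement signals. Nothing in it reflects any real platform’s features or data; the point is simply that the trained model will score a candidate video on demand while the hundreds of decision trees inside it offer no human-readable reason for the score.

```python
# A toy "watch next" ranker trained on made-up engagement signals.
# Everything here is hypothetical; no feature or label reflects a real platform.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features for a (user, video) pair.
X = np.column_stack([
    rng.random(n),          # similarity to videos the user watched before
    rng.random(n),          # the video's overall popularity
    rng.integers(0, 2, n),  # same channel as the last video watched?
])
# Synthetic label: did the user keep watching?
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
     + 0.2 * rng.standard_normal(n)) > 0.5

model = GradientBoostingClassifier().fit(X, y)

# The model will happily score any candidate video...
candidate = np.array([[0.9, 0.4, 1.0]])
print(model.predict_proba(candidate)[0, 1])  # probability the user keeps watching

# ...but the hundreds of trees inside it give no human-readable reason
# for that single score. That gap is the black box problem.
```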

Consider two photos: one of a fox, one of a dog.

You can probably tell within a few milliseconds which is which. But could you explain how you know? Most people would find it hard to articulate what it is about the nose, the ears, or the shape of the head that tells them apart. Yet they know exactly which picture shows the fox.

A similar paradox applies to machine learning models. They will often give the correct answer, but their designers often cannot explain how. That doesn’t make them completely inscrutable. A small but growing industry is emerging to monitor how these systems work. Its bread-and-butter task: improving the performance of an AI model. Companies that use such tools also want to make sure their AI isn’t making biased decisions when, for example, it reviews job applications or approves loans.

Here’s an example of how one of these startups works. A financial firm recently used Israeli startup Aporia to check whether a campaign to recruit student customers was working. Aporia, which uses both software and human reviewers, found that the company’s AI system was in fact making mistakes, granting credit to some young people it shouldn’t have and unnecessarily withholding credit from others. When Aporia took a closer look, it found out why: students made up less than 1% of the data used to train the company’s AI.
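
A monitoring tool can surface that kind of gap without ever opening the model itself. The sketch below uses hypothetical column names and made-up numbers rather than Aporia’s actual product or the firm’s data; it shows the two basic checks involved: how thinly a group appears in the training data, and how its error rate compares once decisions are reviewed by humans.

```python
# Hypothetical column names and made-up numbers; not Aporia's product or the firm's data.
import pandas as pd

# 1) How thinly is each group represented in the training data?
train = pd.DataFrame({
    "occupation": ["employed"] * 990 + ["student"] * 10,  # students: ~1% of rows
    "approved":   [1, 0] * 495 + [0] * 10,
})
print(train["occupation"].value_counts(normalize=True))  # the model barely "sees" students

# 2) Once decisions are reviewed, compare error rates per group.
reviewed = pd.DataFrame({
    "occupation":           ["employed"] * 200 + ["student"] * 50,
    "model_said_yes":       [1] * 150 + [0] * 50 + [0] * 40 + [1] * 10,
    "should_have_been_yes": [1] * 140 + [0] * 60 + [1] * 30 + [0] * 20,
})
reviewed["error"] = reviewed["model_said_yes"] != reviewed["should_have_been_yes"]
print(reviewed.groupby("occupation")["error"].mean())  # far higher error rate for students
```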

According to Liran Hosan, Aporia’s chief executive officer, the black box’s reputation for impenetrability has in many ways been exaggerated. With the right technology you can, potentially, even unravel the ultra-complicated language models that underpin social media companies, in part because, to a computer, even language can be represented as numbers. Figuring out how an algorithm spreads hate speech, or fails to curb it, is certainly harder than spotting errors in numerical credit data, but it is possible. And European regulators intend to try.
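
The point about language becoming numbers is easy to demonstrate. The toy example below uses a simple bag-of-words count rather than the far richer representations inside real language models, but the principle, that text is turned into numbers a machine (and an auditor) can inspect, is the same.

```python
# A toy illustration: text becomes numbers a model (and an auditor) can work with.
# Real platforms use far richer representations, but the principle is the same.
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "great to see everyone at the meetup",
    "the referee ruined the match last night",
    "try the new pasta place on main street",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)

print(vectorizer.get_feature_names_out())  # the vocabulary the numbers refer to
print(X.toarray())                         # each post as a row of word counts
```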

According to a European Commission spokesman, the forthcoming Digital Services Act will require online platforms to undergo an annual audit assessing how “risky” their algorithms are for citizens. That could force companies to grant unprecedented access to information many consider trade secrets: code, training data, and process logs. (The commission said its examiners would be bound by confidentiality rules.)

But suppose Europe’s watchdogs couldn’t make sense of Facebook’s or YouTube’s code. Suppose they couldn’t examine the algorithms that decide which videos or posts to recommend. There would still be plenty they could do.

Manoel Ribeiro, a Ph.D. student at the Swiss Federal Institute of Technology in Lausanne, published a study in 2019 in which he and his co-authors tracked how certain YouTube visitors were radicalized by far-right content. He didn’t need access to YouTube’s code to do it. The researchers simply looked at comments on the site to see which channels users were visiting over time. It was like following digital footprints: tedious work, but it ultimately revealed how a fraction of YouTube users were lured into white-supremacist channels by influencers who acted as a kind of gateway drug.
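
The method needs no privileged access, only public traces and patience. Here is a rough sketch of the idea, with a handful of invented comments standing in for the millions the researchers actually analyzed: line up each user’s comments over time and count how many drifted from other channels toward extreme ones.

```python
# Hypothetical comment data standing in for the millions of public comments
# the researchers actually analyzed; channel labels are invented for illustration.
import pandas as pd

comments = pd.DataFrame({
    "user": ["u1", "u1", "u1", "u2", "u2", "u3"],
    "year": [2016, 2017, 2018, 2017, 2018, 2018],
    "channel_type": ["mainstream", "gateway", "extreme",
                     "mainstream", "mainstream", "gateway"],
})

# Line up the channel types each user commented on over time...
trajectories = (comments.sort_values("year")
                        .groupby("user")["channel_type"]
                        .apply(list))
print(trajectories)

# ...then count how many users who started elsewhere ended up on extreme channels.
migrated = trajectories.apply(lambda t: t[0] != "extreme" and "extreme" in t)
print(f"{migrated.mean():.0%} of these users drifted toward extreme channels")
```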

Ribeiro’s study is part of a broader body of research that has tracked the psychological side effects of Facebook or YouTube without needing to understand their algorithms. While such studies offer a relatively surface-level view of how the platforms work, they can still help regulators impose broader obligations on them. Those could range from hiring compliance officers to make sure a company is following the rules to giving auditors regular spot checks of the kind of content users are being steered toward.

That’s a radical departure from the secrecy in which Big Tech has been able to operate until now. And it will involve new techniques as well as new policies. That could well be a winning combination for regulators.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for The Wall Street Journal and Forbes, she is the author of We Are Anonymous.
