Police? Please come to Facebook!

Jack Kotarbinski, PhD
5 min read · Aug 24, 2024


Facebook is a place where absurd bans are as frequent as cat photos and game invitations you never wanted. Over the years, users have learned that Facebook’s algorithms can sometimes react like an overly sensitive teacher on playground duty — or as if they’ve wholly misplaced the manual for common sense.

Once upon a time, there were discussion lists — online spaces where the true adventure of exchanging ideas began. Back then, bans weren’t handed out by a soulless algorithm but by a real person of flesh and blood sitting on the other side. And while moderation on those lists had flaws, it was at least… human.

The admin on discussion lists was an almost mythical figure. On one hand, the guardian of order; on the other, a comrade-in-arms in the battle against internet trolls. You could write to them, explain your case, or even ask for a second chance. Sometimes they understood, sometimes they didn’t, but at least you knew someone on the other side drank coffee in the morning, not just electricity for machine learning.

When you got banned on a discussion list, you could always try to explain to the moderator that the joke about “dropping a bomb” was only about a presentation, not something more sinister. The moderator might or might not believe you, but at least you had a chance to communicate. With an algorithm, there’s no conversation: it only knows zeros and ones.

Moderators sometimes reacted to situational humor: one day you might be banned for using the word “clown,” and the next, that same moderator might make you laugh with their own joke about clowns. It all depended on whether the moderator was having a good day or had just gotten a parking ticket. After all, the phrase “I know where you live” could be read one day as a criminal threat and the next as the confirmation of a date.

One unforgettable element of bans on discussion lists was the possibility of appealing the decision. By writing to the moderator, you could plead for mercy, swearing that you wouldn’t post links to your cookie recipe blog in a thread about cryptography again. If the moderator was in a good mood — miracles could happen!

Every moderator had their own rules, which even they sometimes didn’t fully understand. You could be banned for swearing, while the same moderator allowed heated debates about the superiority of one operating system over another, where the language was… colorful. That was the charm of those times: you never knew when you might break some unwritten rule. Those days are over.

The online world of AI admins is where linguistic nuances disappear faster than memes. Machines, programmed to protect against hate speech and inappropriate content, often resemble a tourist in a foreign country who knows only a few basic phrases from a phrasebook. The result? Sometimes it’s more comedic than frustrating.

Take the case of sauerkraut. A friend posted about his favorite recipe for German sauerkraut, adding the enthusiastic comment: “Germany above all!” — meaning, no one does it better. What did our beloved algorithm understand? A political slogan. A culinary recipe became the reason for a ban.

AI models are masters at ignoring context. Someone writes a post about their gardening problems: “I’m going to kill these weeds; they’re driving me crazy!” Clear as day to a human. To the algorithm, a red light flashes and a siren wails, because all it registers is the word “kill.” The user gets banned for making threats.
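
To see just how blunt that kind of matching is, here is a toy sketch in Python. It is my own illustration of a context-blind keyword filter, not Facebook’s actual system; the keyword list and the function are hypothetical.

```python
# Toy illustration of context-blind keyword moderation.
# A hypothetical sketch, not Facebook's real pipeline.

BANNED_KEYWORDS = ["kill", "bomb"]

def naive_moderate(post: str) -> str:
    """Flag a post if any banned keyword appears anywhere in it,
    with no attempt to understand the surrounding context."""
    text = post.lower()
    for word in BANNED_KEYWORDS:
        if word in text:
            return f"BANNED (matched {word!r})"
    return "OK"

# A human reads context; the filter reads substrings.
print(naive_moderate("I'm going to kill these weeds; they're driving me crazy!"))
# -> BANNED (matched 'kill')
print(naive_moderate("We were just killing time at the airport."))
# -> BANNED (matched 'kill')
print(naive_moderate("That stand-up set was the bomb!"))
# -> BANNED (matched 'bomb')
```

The gardener, the bored traveler, and the comedy fan all get the same verdict, which is precisely the roulette the examples below describe.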

There’s nothing like trying to explain yourself in English. Try writing, “Sorry, it was just a joke, mate!” after you’ve been banned. The algorithm, which processes language like Google Translate in turbo mode, might understand it as “Sorry, I was just a bomb, mate.” Now you’ve got another problem.

Examples abound: from being banned for using the word “kill” in the context of killing time to being barred from posting pictures of smooth walls because the algorithm interpreted them as “nudity.” In this increasingly bizarre world, every post is like roulette: you never know if your innocent words will be turned into something entirely different.

During the Nazi occupation of Poland, the language of metaphors became one of the most powerful tools of the Home Army and other members of the resistance. It was a language full of symbolism, hidden meanings, and codes that made it possible to conduct covert operations while avoiding detection by the occupier. Information was often encoded in simple, everyday phrases that carried deeper strategic significance for those in the know. For example, when someone spoke of a “wedding on Saturday,” it meant a military operation was planned for that day. The phrase “more bread is needed” could mean that a unit needed more ammunition for an action. Someone asking about the “number of apples in the orchard” was inquiring about the number of grenades available in a given unit. “Fishing” meant acquiring weapons, often in supply operations or attacks on the occupier’s warehouses.

In modern Poland, we still use metaphorical language during every parliamentary election, when we ask about “prices at the market.” Everyone knows it refers to unofficial election polls, but no one technically breaks the election silence. Of course, the answers vary wildly, but the fun is guaranteed.

Facebook banned me for three days over a photo I posted exactly… 12 years ago. Most likely there was an algorithm update; it happens. I’ve received warnings for all sorts of things, like music playing in the car during a livestream. That’s not the problem, though. The issue lies elsewhere: we have to get used to digital media where we are punished first, banned for innocent content, and where complaints are pointless.

In 2019, the terrorist attack in Christchurch, New Zealand, which showed the killing of innocent people, was live-streamed on Facebook, and the video remained available for a long time. Although it was one of the most brutal cases of live-streaming, Facebook initially didn’t react quickly enough, causing a wave of outrage worldwide.

Videos showing cruel treatment of animals, such as beating or killing, have often been reported by users but weren’t removed because the algorithm deemed they didn’t violate community guidelines. This kind of content provokes strong emotions and opposition, yet despite numerous reports it sometimes stays online for a long time.

During the COVID-19 pandemic, some content spreading false information about the virus, vaccines, or government actions was considered acceptable by Facebook. Even though it had the potential to harm public health, the platform initially didn’t remove it, which led to widespread misinformation.

In the past, people moderated discussion forums, which allowed for a more personal approach and a chance to clear up misunderstandings. Today’s social media algorithms, unable to understand context, often punish users for completely innocent content while ignoring far more serious violations. These absurd bans show how sensitive the machines are to specific words and how badly they fail to grasp their real meaning.

Read my blog in English

See My Amazon
