😱 When Hashtags Turn Horrific: Instagram’s Algorithm Unintentionally Fuels Dark Networks 📲🚫

TL;DR: Instagram’s algorithm landed in hot water as it allegedly enabled and even promoted a disturbing “child-sex material” network. How? By permitting searches using hashtags linked to child abuse. This major hiccup has left users and observers wondering, “Just how safe are we in the virtual world?” 🤔

Imagine the shock of discovering that your favorite pizza delivery app was secretly recommending the best locations to buy black-market diamond-encrusted pizza cutters. 😱 No? Too obscure? Now imagine if that were not so innocent and instead involved child exploitation. Not laughing now, are we? This is the unsettling reality that recently surfaced on Instagram, the photo-sharing platform where cute puppies and mouth-watering burgers usually rule supreme.

An alarming report has shone a spotlight on Instagram’s recommendation algorithms, which unintentionally gave a thumbs-up to a “vast pedophile network.” It’s as if the algorithm said, “Hey, you’re into reprehensible illegal content? No judgment, here are some options!” 🤯 The report’s findings, brought to us by the ever-curious minds at Stanford University and the University of Massachusetts Amherst, paint a picture that’s far from those cute #catsofinstagram posts.

This Holmesian duo of research teams found that Instagram users could search using child abuse-related hashtags like #pedowhore, #preteensex, #pedobait, and #mnsfw (an unsettling acronym meaning “minors not safe for work”). Talk about a serious case of digital indigestion. 🤢

These alarming hashtags led users to accounts selling illicit materials, offering “menus” of content, or even arranging “meet-ups.” (Here’s where you insert the disgusted face emoji.) 🤮

Even Sarah Adams, a Canadian social media influencer who spends her time calling out online child exploitation, found herself ensnared by Instagram’s recommendation algorithm. One of her followers reported an account named “incest toddlers” (cue the collective shudder) that was filled with “pro-incest memes.” Adams interacted with the account just long enough to report it, but lo and behold, Instagram started recommending the account to those who visited her page. Cue Adams saying, “Wait, that’s not what I meant!” 😰

Meta, Instagram’s parent company, confirmed the violation and noted that it had restricted thousands of additional search terms and hashtags on Instagram. The company pointed to its robust enforcement efforts, stating that it had disabled over 490,000 accounts and blocked more than 29,000 devices for policy violations in a single week. Not to mention, it took down 27 networks that spread abusive content from 2020 to 2022. 💪

Even with these efforts, researchers found “large-scale communities promoting criminal sex abuse” on Instagram. Alex Stamos, the head of the Stanford Internet Observatory and Meta’s former chief security officer, believes this discovery should set off alarm bells at Meta. Stamos has called for Meta to “reinvest in human investigators.” Is this the signal for tech giants to realize that not everything can be solved by algorithms? 🤔

Adding a cherry on top of this messed-up sundae, Instagram had a peculiar feature that warned users that certain searches could lead to “images of child sexual abuse.” But get this: it gave users the option either to get resources on the topic or to “see results anyway.” Seriously, Instagram? 😳

As scrutiny intensifies on Meta and other social media platforms over their content policing efforts, one can’t help but wonder just how much the internet giants are actually doing to keep their users safe.