Creation tools & misinformation
The most obvious trend in technology is the proliferation of creation tools that allow the formerly 98% of internet “consumers” to break into the 2% of internet “creators.” TikTok has made short-form video editing and publishing feel effortless, whereas just ten years ago you’d have to sift through dozens of video files from your camcorder’s SD card, stitch them together in Windows Movie Maker while keeping audio and video in sync, and wait as YouTube slowly processed the upload. The journey from having the idea for a video to recording it to uploading it used to take several hours of planning, preparation, and execution. TikTok has compressed those hours into a few swipes and taps that can be done in under five minutes.
Twitter did the same thing with microblogging. It’s no secret that when Twitter was originally created, its founders didn’t quite know what it would be used for or whether it would really take off. It was a step away from incredibly long-form content toward short, digestible snippets that could be read at a glance. The side effect was that typing out 140 characters and hitting “Tweet” was far less intimidating than staring at a blank Word document, about to pen your thoughts on a specific topic. By letting users quickly compose short sentences and reap the rewards of public engagement, Twitter indirectly made people more comfortable sharing their thoughts with anonymous strangers.
A very recent example is the “video game” Dreams, which is quite literally a game engine. Except instead of writing code, you work in a 3D worldspace with tons of visual connectors and editors to build your game. Dreams even markets itself as a “creation engine.” You can make anything in it, from simple 2D platformers to highly complex 3D narrative games with branching paths and intermixing logic. Its full release only came out last year, so it will be a while before we see what really comes out of it, but it’s a great example of lowering the barrier to entry for game creation. Anyone with an idea in their mind can start prototyping and building it in Dreams with no knowledge of a programming language.
Needless to say, I’m really happy about creation tools. They take a task whose strenuous or annoying process acts as a gatekeeper to the creation realm and turn it into a highly accessible, seamless experience that opens up a world of opportunity for those who couldn’t access it before. The big debate right now about the widespread reach of these creation tools, however, is the misinformation they can enable.
Social media has been under heavy fire lately for not doing enough to limit hate speech on its platforms. Left unchecked, hate speech eventually turns into violent speech as the people who spew it probe the boundaries of the system to find its limits. Stepping in and banning accounts for glorifying violence is already too late. The misinformation has already done the damage of radicalizing everyone who looked at the hate speech and, if left to simmer in the minds of readers or viewers, will naturally progress toward acts of violence. This is why Twitter’s policy of suspending accounts that glorify violence while letting those who spread hate speech stay because they’re “protected” under First Amendment rights makes absolutely no sense. It’s like allowing the seed to grow into a tree but deciding to draw the line when it starts bearing poisonous fruit. You knew what the seed was and you knew what tree it was growing into, so don’t act surprised when the fruit it bears is toxic.
So much of the problem is also with how the platforms split their content amongst micro-bubbles. Jack Dorsey himself has said that “American political Twitter” is but a small microcosm of the larger Twittersphere where a lot of good is happening, but that doesn’t mean you can ignore the most dangerous parts of your community. Imagine a zookeeper saying “Yeah, the lions bite off one of my fingers every time I feed them, but the rest of the animals are incredibly well behaved, so overall it’s really great.”
Twitter, following an approach popularized by Facebook, uses likeness-matching algorithms to serve you content that people you follow and others in your “bubble” tend to like or engage with. This cuts you off from those other, larger Twitterverse conversations that apparently only Twitter employees can see and get a broader sense of. Worse, something highly specific happening in your bubble can get quote-tweeted by someone outside it and taken completely out of context, twisting the actual tweet into a frenzied version of its original intent backed by nothing but hysteria and virality.
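The core of that “people like you also liked this” mechanic is simple to sketch. Here’s a toy illustration of engagement-overlap ranking; the data, names, and scoring are hypothetical stand-ins, not any platform’s actual system:

```python
# Toy sketch of "people like you also liked" ranking.
# All data and scoring here are hypothetical, not a real platform's algorithm.
from collections import Counter

likes = {
    "alice": {"t1", "t2", "t3"},
    "bob":   {"t2", "t3", "t4"},
    "carol": {"t3", "t4", "t5"},
}

def recommend(user: str, likes: dict[str, set[str]]) -> list[str]:
    """Rank unseen tweets by how much their likers overlap with `user`."""
    seen = likes[user]
    scores: Counter[str] = Counter()
    for other, their_likes in likes.items():
        if other == user:
            continue
        overlap = len(seen & their_likes)   # similarity = shared likes
        for tweet in their_likes - seen:    # only surface unseen tweets
            scores[tweet] += overlap
    return [tweet for tweet, _ in scores.most_common()]

print(recommend("alice", likes))  # prints ['t4', 't5']
```

Notice the feedback loop: every like you add inside your bubble increases your overlap with the same people, so the next batch of recommendations comes from an ever-narrower slice of the network.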
TikTok’s community benefits from skewing much younger, with most folks on the platform recognizing and appreciating the same kinds of satire and clapbacks. But it has the same micro-bubbles it filters you into, and it still suffers from people being breadcrumbed into misinformation bubbles: see one QAnon video and like it, and over the next few days your entire feed becomes QAnon videos until you buy into the conspiracy theory. You may see two thousand TikToks spreading the same lie, each with over twenty thousand likes, which traps your brain in an inescapable confirmation bias: everybody in the world must believe this, so it must be true.
Let’s dial back real quick to the early internet, because this problem wasn’t invented with social media. In the late ’90s and early 2000s, we had the magic of phpBB forums. Folks from all over the world would find a forum by googling for something highly specific, like Harry Potter fanfic or competitive Pokémon battling. Because these were typically niche interests, chances are that people who gravitated toward these communities didn’t know many people in real life who shared their passions, so they’d make an account on the forum and start chatting up the members. Soon, they’d be logging in every day to see what the general conversation was all about. This was honestly a magical, wonderful time of the internet.
But misinformation still existed. People would post lies, conspiracy theories, and all sorts of scams. And they got taken down by moderators. Moderators. Regular longstanding members of the community with a good reputation who did their best to ensure the forum’s guidelines were being followed. The rules were the generic stuff: no insults or profanity, treat people with respect, no NSFW links, etc. There were people who would check and approve every post and comment, which over time improved with users being able to report certain members, and eventually grew into automated flagging systems via bots. The point is, moderation existed. It wasn’t this “wild west” system of social media where anyone can post anything and automatically gains some sort of legitimacy from it.
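That progression, from manual review to user reports to automated flagging, is easy to picture in miniature. Here’s a minimal sketch of report-based auto-flagging; the threshold and data layout are assumptions for illustration, not any specific forum software:

```python
# Toy sketch of report-based auto-flagging, like the bot systems forums grew into.
# The threshold and data structures are hypothetical.
REPORT_THRESHOLD = 3  # reports needed before a post is hidden for moderator review

posts = {}  # post_id -> {"author": ..., "reports": set(), "hidden": bool}

def submit_post(post_id: str, author: str) -> None:
    posts[post_id] = {"author": author, "reports": set(), "hidden": False}

def report_post(post_id: str, reporter: str) -> None:
    post = posts[post_id]
    post["reports"].add(reporter)  # a set, so one user can't mass-report alone
    if len(post["reports"]) >= REPORT_THRESHOLD:
        post["hidden"] = True      # hidden until a human moderator reviews it

submit_post("p1", "troll42")
for user in ["alice", "bob", "carol"]:
    report_post("p1", user)
print(posts["p1"]["hidden"])  # prints True
```

The design point is that automation only triages: crossing the threshold hides the post, but a human moderator still makes the final call, which is exactly the layered system those old forums converged on.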
Every time the topic of “moderating” modern social media comes up, there’s the inevitable backlash from people on the other extreme saying you can’t “silence free speech.” Well, please allow me to take this opportunity to use my free speech rights and loudly proclaim that moderation doesn’t mean censorship. If your goal is to spread hatred and it’s against the community guidelines, you get kicked out of the community, no questions asked. That’s it. Social media giants didn’t have these guidelines or ways to enforce them for the longest time, until their hand was eventually forced by the uncontrolled, viral growth of extremist content.
I recall being part of so many online communities back in 2005 where teenage boys would regularly get kicked off the forums for sending creepy private messages to female users. And that was it. Everyone in the community agreed it was not okay to do that and was fine with the ban. We moved on. And yet, Twitter deals with this on a global scale today and still hasn’t solved the problem. When they ship features like Fleets or Spaces instead of dealing with problems like these, it sends a strong signal to the community about what they do and don’t care about.
Maybe a real-world analogy is appropriate here. Think about how in the 1800s, if you wanted to spread hatred or lies or conspiracy theories, you technically could. You could go down to the town square and yell it at the top of your lungs. Everyone would look at you, think you’re crazy, shrug it off, and get on with their day, especially in a busy city. But every village or town has at least two or three people who are batshit crazy like this. They didn’t really have the means to travel over to the other towns and form a Misinformation Coalition now, did they? They just went to the common areas, yelled out their thoughts, and left. A podcast I was listening to summed this up really well: social media has “allowed the village idiots to find each other,” and the companies are outright ignoring the problem because solving it isn’t currently profitable.
Misinformation is really a sad reality of what the promise of the internet has devolved into. Simplifying complex tools and making the internet more accessible through easier creation tools is something that should be celebrated and cherished. We should be joyfully learning about cuisines from other parts of the world and enjoying being educated about the ancient rituals of a long-lost tribe. All of that is still there, but it’s being drowned out by a sea of delusion and lies that rises to the top, while the rest of us suffocate trying to reach the surface. The future of the free and fair internet is in great peril until we solve this issue, and it’s gonna take a lot more than what we’re currently doing to get there.