James Noguera

Free Speech and Social Media

Credit: Tracy Le Blanc

Free Speech
Free speech is the right that enables all other rights. Without the ability to speak our minds freely, we would not be able to effectively shape this democracy. The Founding Fathers knew this. It’s the First Amendment for a reason. Breaking free from a monarchy, it’s unsurprising they chose this freedom as a means to protect citizens from government persecution. In other words, if you say something the government doesn’t like, it can’t then retaliate against you. That’s the way it’s supposed to work, anyway. As a result, we can voice our dissent from public policies or government officials. If we’re right, over time, more people will agree with us than not. In this way, we can have a true democracy. 

Unprotected Speech
Freedom of speech, however, does not give you the right to endanger others. You cannot yell “Fire!” in a crowded theater, for example, because of the clear and present danger: it would cause panic, and people would get hurt. It is not your right to say absolutely anything you want. The spirit of the First Amendment, then, is that this power cannot be abused in such a way as to directly threaten others’ well-being.

Also, freedom of speech is not protection against criticism. Sometimes prominent individuals who receive backlash over public statements say their free speech is being violated when, really, they’re just being criticized. Clearly, we need to be able to express ideas openly in a democracy. Part of that, though, is that we should also be free to criticize ideas frankly. (I prefer to criticize bad ideas, not people. I find ad hominems generally don’t further conversation; they only instill in participants a sense of struggle: one will win, one will lose. I want to contribute to the growth of society.) It’s important for good ideas to win, and bad ideas to lose.

Social Media
Social media complicates all this. A social media company does not merely extend reality into the digital world: it magnifies voices; it curates content; it creates and enables. In this way, social media companies bear some responsibility when speech is used to directly harm others on the platform. The difficult part is, what should they do? And when should they do it? Let’s use an example. If some online group decides to launch an online bullying campaign involving thousands of people against some young person, one involving racist and derogatory comments, what should be done? Banning those individuals from using the platform in this case seems like the right thing to do. Life, though, is often more nuanced, so a case-by-case approach makes sense. But, now, that means that someone has to make the necessary decisions. In the real world, we have a justice system. There’s evidence, a trial, arbitration, judges, and so on. Part of the problem is that, within the borders of Twitterstan, the executives are the ruling elite; they have a God-like role. Currently, most of the somewhat analogous online processes occur behind the green curtains of Zuck or, as seems likely in the future, of Musk. That’s a lot of power, and obscurity. So, when people get banned, narratives quickly propagate alleging bias. It doesn’t help such allegations that so many of these Big Tech companies are overwhelmingly liberal.

You could argue, “Hey, that’s the free market! If all of these successful Big Tech companies are liberal, then those ideals are the ones that most people agree with and think are right!” The problem, though, is that many of these companies regularly engage in anti-competitive practices and have reached near-monopolistic or duopolistic status. It’s hard for public pressure to force self-regulation when you don’t actually feel that pressure: either because your consumers have no alternative (think Comcast) or because they’re addicted (think Big Tobacco). Much of social media embodies both of these problems. Twitter, for example, currently has between 200 and 300 million monetizable daily active users (mDAUs). So, it’s hard to say to some dissenter, “Hey, use something else.” Further, social media addiction among young people, who exemplify the problem facing Americans, has been increasing for some years.

Then there’s government regulation. To the capitalist, government should have a light touch, if any, on the operations of private enterprises. For the free market to work, it needs to be free. But the market can’t be truly free; we set some limitations. Slavery isn’t a thing anymore, for instance. We know people make bad choices (remember 2016?). We also know that social media algorithms and current attention-driven incentives make it harder to even make choices, and that they harm our long-term mental and emotional health, especially for young people, as Tristan Harris, a former design ethicist at Google, brilliantly argues.

So, to me, it’s clear that A) we need companies to have some form of autonomy for the free market to function. They need to set their own rules. And because of that, they bear responsibility, too, for what happens on their platforms. And B) there needs to be some government regulation to prevent people from intentionally and directly harming others. 

None of these things is easy. What are a company’s responsibilities, and what are a government’s? What about hate speech? What about misinformation? How is that defined? What constitutes harassment or bullying? Should all these things be addressed equally? And what do you do with offenders?

Solution?
What would I do if I ran Twitter, say? First, let me acknowledge that there are people smarter than me already working there, and who will be working there in the future. They have thought about this for longer than I have. Generally, I believe in expertise. If 9 out of 10 doctors or researchers hold a particular opinion about something within their field, I value that. I tend to listen to it. So that’s just an acknowledgement that I don’t have all the answers. I also think that, over the long term, the free market, assuming it is working right (meaning it is generally fair), tends to be on to something. We see, over time, better strategies and approaches win out. That’s just a bit of optimism from me.

But this isn’t a cop out. I agree with Elon Musk in this regard: Free speech principles are good principles. A social media platform, like Twitter, should try to emulate the First Amendment as much as is reasonably possible. While acknowledging all of the potential problems above, I think we should largely be able to say what we want to say, no matter the politics, religion, or ideology. In addition, if our ideas suck, people should be free to say so. I think it’s more productive when criticism is offered in a respectful and helpful manner. That way we all benefit and are less likely to become more atomized.

And where do we split the responsibilities? What is a private company’s role? What is the government’s role? I believe young people should be protected from being targeted online. I also think that overt racism should be banned online. A platform shouldn’t help you say derogatory, racist shit. Misinformation is more difficult. For me, it depends on how you define it. If you are knowingly spreading false information, and there is clear and present danger in spreading said misinformation, then, yes, I think that’s a problem that should be addressed, in proportion to the offense.

I think the government should probably focus on the spirit of the First Amendment, which is to protect others from harm — that is, to address specific threats of violence. Everything else should probably be within a company’s purview. Some will not agree with me on which problems should be addressed and by which party. But I do think the free market, in the long run, will help us get to a better place.

I’m curious how Musk will change Twitter, how he will address these issues. I don’t envy him in that.