You might have read the news this morning that Apple has banned Alex Jones and Infowars from their podcasting platform. They join Facebook, Spotify, and YouTube in tossing this material off their distribution channels. Some of you will see this as a political move, stifling free speech. I don’t want to look at it that way today. Instead, I’d like us to focus on some business issues.
If you’re not familiar with Mr. Jones, he’s a conspiracy theorist who has claimed, among other things, that the murders at Sandy Hook Elementary School were staged by paid actors and that the government is poisoning children to make them gay. Do you remember the man who walked into a pizzeria with a gun to free the children supposedly being held there as part of a sex ring? He was an Alex Jones listener who had heard on Jones’ program that the Clintons were running that ring.
Following the ban, some folks are yelling about freedom of speech and the First Amendment. Sorry, folks. Some speech is not protected. I can’t make things up about a product and knowingly advertise false information. I can’t yell “fire” in a crowded theater. The most relevant category of unprotected speech is this:
Government may prohibit the use of “fighting words,” which is speech that is used to inflame another and that will likely incite physical retaliation. Likewise, language that is meant to incite the masses toward lawless action is not protected. This can include speech that is intended to incite violence or to encourage the audience to commit illegal acts. The test for fighting words is whether an average citizen would view the language as being inherently likely to provoke a violent response.
That’s exactly why this material was banned. It violates the platforms’ terms of service. Frankly, it disappoints me that it’s taken so long and it raises a business point we all need to consider.
Section 230 of the Communications Decency Act protects platforms from liability for what other people publish on them. It prevents me from suing a platform when a third party writes something completely false about me, and it’s a great idea. The problem is that too many platforms hide behind it, fearing that if they begin moderating obviously false or hateful content, they might, in fact, become liable. In doing so, they let the platform become a megaphone for hate and disinformation. Most importantly, it damages their reputation and turns off users. Look at what has happened with Twitter. The word I hear most often when people describe it is “cesspool.” To their credit, Twitter management is (finally) acting to clean it up, but a lot of damage has already been done.
Any of us in business need to do more to protect our brands and businesses than the minimum the law requires. Being corporately responsible is proactive. Remember that there are other channels through which Mr. Jones or any other content provider can distribute their information. That doesn’t mean I have to allow him or anyone else into mine, just as you don’t need to permit anyone into your retail store who you find potentially troublesome – a suspected shoplifter, for example – as long as your decision is not based on bias against a federally protected class of people. I need to be clear about that to my users (stating in the terms of service, perhaps, that we don’t welcome hate speech or knowingly false information here). Most importantly, I need to be responsible and do my best to do the ethically correct thing. Not because I dislike what you have to say, but because it’s a hate-filled lie.