The following hierarchy attempts to classify different levels of content moderation (or censorship, depending on the context) in order to encourage a more productive discussion on how social networks ought to regulate content.

Six Levels of Internet Content Moderation

Level 1: Anything goes
Examples: email before spam filters, darknet networks

Level 1 networks have no content filtering whatsoever. Many networks start out this way during the development/testing phase. Level 1 networks inevitably evolve to level 2 as they become popular, in order to deal with spammers.

Level 2: Content which disrupts core functionality prohibited
Examples: Bitcoin, modern BitTorrent aggregators, some top-level domains (.com, .net)

Level 2 networks implement protections against behavior which disrupts core network functionality. For example:
* Email has added protocols to discourage and filter spam (IP blacklists, SPF, DKIM, CAN-SPAM)
* Although The Pirate Bay flouts copyright law, it has strong protections against fake and mislabeled torrents.
* Each Bitcoin transaction costs a small amount of Bitcoin in order to discourage “dust” transactions (spam and DDoS attacks).
* Participants in darknet markets such as the Silk Road review vendors in order to discourage fraud.
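The Bitcoin bullet above can be made concrete. Here is a minimal sketch of a “dust” check modeled on Bitcoin Core’s default relay policy; the constants (a 3,000 sat/kvB dust relay feerate, a 34-byte P2PKH output, a 148-byte spending input) are assumptions about one implementation’s defaults, not consensus rules.

```python
# Sketch of a "dust" check in the style of Bitcoin Core's relay policy.
# The constants are assumed defaults of that implementation, not rules
# of the Bitcoin protocol itself.

DUST_RELAY_FEE = 3000  # satoshis per 1,000 vbytes (assumed default)

def dust_threshold(output_size: int = 34, spend_size: int = 148) -> int:
    """Smallest output value (satoshis) that is NOT considered dust.

    The threshold counts both the output itself and the input that
    would later be needed to spend it, priced at the dust relay feerate.
    """
    return (output_size + spend_size) * DUST_RELAY_FEE // 1000

def is_dust(value_satoshis: int, output_size: int = 34) -> bool:
    """An output worth less than the threshold costs more to spend
    than it is worth, so relaying nodes refuse it as spam."""
    return value_satoshis < dust_threshold(output_size)
```

Under these assumed defaults, the threshold for a standard P2PKH output works out to 546 satoshis, which is why tiny spam outputs below that floor are simply never relayed — a pure level 2 defense of core functionality rather than a judgment about content.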

Level 3: Illegal content prohibited
Examples: Internet infrastructure services such as Amazon Web Services, Cloudflare, GoDaddy, Google Cloud; some top-level domains (.edu, .gov)

Level 3 networks prohibit behavior and content which is illegal within their jurisdiction. For example, Cloudflare will provide DNS and firewall services to sites that are racist, homophobic, violate copyright law, etc., as long as they do not violate U.S. law.

Prohibiting “illegal” content can be tricky because the World Wide Web is global, but laws operate within a complex geographic system. For the most part, this results in prohibition of content which violates the laws of the region where the site is hosted.

Level 4: Abusive content prohibited (“Community standards”)
Examples: Facebook, Reddit, Twitter, YouTube, Pinterest, Google Play Store

Level 4 networks go beyond prohibiting content which is merely illegal and implement a set of (officially) uniform rules. These rules might ban “disruptive” or “abusive” behavior. The definition of “abuse” varies, but in principle, under level 4 moderation the freedom to express an opinion is respected as long as the expression is not “harmful” to another person or group. For example, on Reddit you can say that you don’t like certain ethnic groups, but you would be banned for publishing the address or photo of a specific member of a “victim” group. (Note: individual subreddits are free to impose much more restrictive rules.)

Level 5: Unorthodox content prohibited (“Only politically correct content allowed”)
Examples: Facebook (?), Twitter (?), YouTube (?), Pinterest (???), Tea Party Community, RevLeft,

Level 5 networks go beyond level 4: they prohibit not only content which is intended to harm specific people or groups, but also certain ideas, views, theories, or ideologies which are not harmful or abusive toward any specific group.
Arguably, the examples above have started to engage in occasional level 5 moderation while claiming to apply only level 4. For example, Facebook banned all links to a site which hosts plans for 3D-printed weapons. Links to the site did not violate any law or specific Facebook rule (the links were banned as “spam,” even when posted to one’s own profile). Twitter and Facebook often ban rude behavior and name-calling directed at minorities.
The recent ban of Alex Jones could be classified as level 5 moderation, though I’m not familiar with which specific content caused the ban.

Level 6: Only high-quality content permitted (“Walled garden”)
Examples: New York Times, the Apple App Store

Level 6 networks are “whitelist” rather than “blacklist” networks and impose strict content filtering rules. For example, apps submitted to the Apple App Store must meet strict technical and content criteria, including bans on politically-sensitive or adult-only content. Newspapers and individual blogs are level 6 networks since they only publish content approved by their editors.


The above is meant to be merely a factual description of the categories of online moderation. Below are my opinions about the appropriate level of moderation for various networks. The appropriate level depends on factors such as the business model and specific use case of a given network. There is a lot more I could say on this topic, but I will conclude with a set of general principles:

Five principles for appropriate content moderation on the Internet

  1. Basic Internet infrastructure (HTTP, TCP/IP, SMTP, Bitcoin) ought to impose level 2 moderation. This is both for the reasons discussed above, and because laws differ between nations, so infrastructure is the wrong place to reach a consensus on which laws ought to be enforced. Attempting to impose regional laws on infrastructure leads to the lowest common denominator being enforced, which effectively happened when GDPR compliance had to be applied globally (or EU citizens banned outright) because most online services lack the technical means to enforce it selectively.
  2. There is a place for Level 2 communities. The Internet ought to have a place where content can be posted regardless of legal status. Actual law (what politicians legislate) and just law (laws that respect people’s rights) often differ, and providing a space where corruption and malicious actors can be exposed is a valuable service of the Internet, even if it comes with the territory of markets for stolen credit cards, child pornography, and other abuses. It’s important to remember that “darknet” networks are merely resistant to censorship and de-anonymization, not immune from law enforcement.
  3. Social networks need to decide if they are infrastructure or content moderators. If they are infrastructure, they ought to practice Level 3 moderation. If they want to present “the best content” to users, they ought to explicitly adopt level 6 standards.
  4. Level 5 moderation is not always wrong, but it ought to be limited to ideologically explicit communities. For example, religious and political groups (such as this site!) are entirely right to impose ideological restrictions on content. Broad-based social networks aimed at society at large (Facebook, Twitter, YouTube) should not practice level 5 moderation.
  5. Moderation should always be explicit. I personally experienced the harmful impact of censorship disguised as technical errors in mainland China. It is disappointing to see similar practices at Facebook, where ideologically sensitive content is marked as “spam” or links added to posts fail to render due to “technical error.”