That’s one thing we discussed in my social informatics course that I wish were discussed more outside of, like.. graduate-level information science courses–
Companies that run online forums literally decide how much and what kind of racism/homophobia/misogyny/transphobia/ableism they will allow. They usually make that decision based on economics– if the hate generates engagement through controversy, they’ll probably let it through. If it makes people boycott the company or brings seriously bad press to a mainstream audience, they’ll probably take it down.
Commercial content moderators are hired to remove objectionable content, and what actually gets removed depends on two things:
- Users reporting content to draw attention to it
- Company policy about what counts as objectionable
The internet isn’t “just like this.” It’s this level of racist because this level of racism is profitable for the company. That level might shift with changing social norms, but it’s an unsympathetic economic line being drawn, not one based on consideration of harm to users.
Two relevant readings from the course for anyone who wants to dig deeper:
Nakamura, L. (2015). The unwanted labour of social media: Women of colour call out culture as venture community management. New Formations, (86), 106.

Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. In Noble, S. U. & Tynes, B. (Eds.), The intersectional internet: Race, sex, class and culture online (pp. 147–159). New York: Peter Lang.