Twitter was allowing advertisers to target ads at users based on hate-group keywords. An investigation found that the data Twitter collects about its users, data used to target ads at them, can wind up assigning people characteristics like “homophobic” or “anti-gay.” Once this was brought to Twitter’s attention, the company apologized and quickly removed the potentially discriminatory terms from its ad-targeting tools.
Social media websites monitor what users do on them. One rationale is to ensure that what you see is relevant to your interests, ideally increasing the amount of time you spend on the site. Sometimes that information is used to market to you individually. Facebook famously sells packaged user data to corporations, and Twitter does the same. Where the two diverge is in approach: Facebook started by restricting its data, increasing its value through inaccessibility, while Twitter presents its information as open. Twitter’s data is cheaper and less in-depth, in part because Twitter positions itself as a fast, newsroom-type site, while Facebook marketed itself as an online log of personal life.
The BBC tested Twitter’s analytics by creating an advertisement targeting three audiences based on potentially hateful keywords. Within a few hours, 37 users saw the ad, and 2 of them interacted with it. The BBC then ran the same advert targeting a user base of 13- to 24-year-olds matched to the keyword “anorexia.” That post was seen by 255 users, and Twitter projected it could potentially reach 20,000 individual accounts. The BBC also tested an anti-Islamic campaign; Twitter’s tool indicated it had the potential to reach around 100,000 users, but the BBC did not launch it.
Twitter’s Potential Hate Group Issue
Following the investigation, Twitter apologized and hastily removed these keywords from its system. This is a welcome move, but ultimately, Twitter may still have a serious problem on its hands.
Twitter’s hateful conduct policy states that “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” The document covers blatant infractions and also states, “We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities.” If Twitter were enforcing its own content rules, the BBC’s ad wouldn’t have been projected to reach so many users. Twitter’s apology is appreciated, but removing a few keywords feels like sweeping a more significant issue under the rug.
Next: Report: Twitter Users Overrepresent Extreme and Fringe Political Ideas, Not The Mainstream
Source: BBC