How can tech fix media distortion

How algorithms manipulate us

The first person then told the next about the articles, who in turn reported to a third subject, and so on. The proportion of negative facts increased along the chain - an effect known as the social amplification of risk. In addition, work by Danielle J. Navarro and her colleagues at the University of New South Wales in Australia found that the information passed along such social diffusion chains is shaped most strongly by the individuals with the most extreme biases.
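The basic dynamic can be illustrated with a toy simulation of such a retelling chain, in which negative facts are simply more likely to be passed on than positive ones. The probabilities here are made-up illustrative values, not parameters from Jagiello's study.

```python
import random

# Hypothetical retelling probabilities: negative facts are assumed to be
# more likely to be passed on than positive ones (illustrative values only).
P_RETELL = {"neg": 0.8, "pos": 0.5}

def retell(facts):
    """Return the subset of facts the next person in the chain hears."""
    return [f for f in facts if random.random() < P_RETELL[f]]

def run_chain(n_people=8, n_neg=10, n_pos=10, seed=1):
    random.seed(seed)
    facts = ["neg"] * n_neg + ["pos"] * n_pos
    for person in range(1, n_people + 1):
        facts = retell(facts)
        if not facts:
            break
        share_neg = facts.count("neg") / len(facts)
        print(f"person {person}: {len(facts)} facts, {share_neg:.0%} negative")

if __name__ == "__main__":
    run_chain()
```

Running the sketch shows the proportion of negative facts creeping upward from link to link, even though no single person in the chain intends to distort the story.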

Worse still, social diffusion makes negative attitudes more resilient. When Jagiello confronted the participants with the original articles after the experiment, it hardly softened their negative views. In 2015, OSoMe researchers Emilio Ferrara and Zeyao Yang analyzed empirical data on such "emotional contagion" on Twitter. They found that people who were exposed to more negative content also tended to share more negative posts; if, on the other hand, people read more positive material, they published more positive posts. Because negative content spreads faster, however, users' emotions can easily be manipulated, for example by circulating texts that trigger reactions such as fear. Ferrara, who now does research at the University of Southern California, and his colleagues at the Bruno Kessler Foundation in Italy have shown that bots spread violent and inflammatory posts during the 2017 referendum on Catalan independence in Spain, thereby exacerbating social conflict.

Automated bots exploit these cognitive vulnerabilities and thereby degrade the quality of content. They are easy to build: social media platforms provide programming interfaces that allow a single actor to set up and control thousands of bots. To expose such accounts, we have developed machine learning algorithms. One of these programs, the publicly available »Botometer«, extracts 1,200 features from a Twitter profile in order to characterize its connections, social network structure, temporal activity patterns, language and other properties. It compares these with the features of tens of thousands of previously identified bots and thereby yields, for each account, the probability that it is automated.
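The underlying idea is ordinary supervised learning. The sketch below illustrates it with a generic classifier trained on made-up profile features and random stand-in labels; it is not Botometer's actual feature set, model or training data.

```python
# Minimal sketch of the supervised-learning idea behind a bot detector.
# Features, data and labels are placeholders, not Botometer's 1,200 features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["followers", "friends", "tweets_per_day", "account_age_days",
            "retweet_ratio", "mean_interval_sec"]  # hypothetical profile features

rng = np.random.default_rng(0)

# Stand-in training set: rows are accounts, labels 1 = known bot, 0 = human.
X_train = rng.random((200, len(FEATURES)))
y_train = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# For a new account, the classifier returns the probability that it is automated.
new_account = rng.random((1, len(FEATURES)))
bot_probability = clf.predict_proba(new_account)[0, 1]
print(f"probability of automation: {bot_probability:.2f}")
```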

Botometer

Botometer is a machine learning algorithm that estimates the likelihood that a social media account is automated. It delivers a number between zero (human) and one (bot). As with all machine learning programs, the reliability of the result depends heavily on the data: if the examples used to train the system deviate too much from the accounts you want to analyze, the assessment may be wrong. It is also up to researchers to interpret the results. Among other things, they have to set a justified cutoff, for example 0.75, above which they classify an account as a bot.
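In practice, such a cutoff is simply a comparison applied to each score, as in this small illustration; raising it yields fewer false bot labels but misses more real bots, lowering it does the opposite. The accounts and scores here are invented.

```python
# Applying a researcher-chosen cutoff (0.75, as in the text) to example scores.
scores = {"@account_a": 0.12, "@account_b": 0.64, "@account_c": 0.91}

THRESHOLD = 0.75  # the justified limit the researchers have to set

for handle, score in scores.items():
    label = "likely bot" if score >= THRESHOLD else "likely human"
    print(f"{handle}: score {score:.2f} -> {label}")
```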

We estimate that in 2017 up to 15 percent of active Twitter profiles were bots - and that they played a key role in spreading misinformation during the 2016 US election. Within seconds, thousands of such programs can publish a false claim, such as the story that Hillary Clinton was involved in occult rituals. Given this apparent popularity, human users then share the content as well.

In addition, bots can influence us by pretending to be people from our own group. To do so, a program merely has to follow people from the same community and like and share similar content. OSoMe researcher Xiaodan Lou simulated this behavior in an agent-based model in which some of the agents are bots that infiltrate a social network and spread large numbers of low-quality posts. In the simulations, the bots effectively reduce the information quality of the entire system even if they infiltrate only a small part of the network. They also accelerate the formation of echo chambers by suggesting other inauthentic accounts to follow.
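A stripped-down version of this kind of agent-based model can be sketched in a few lines: posts carry a quality value, attention is limited to a short feed, and a small fraction of bot agents floods that feed with junk. This is only an illustration of the mechanism, not the OSoMe model itself; all parameters are assumptions.

```python
import random

FEED_SIZE = 10   # limited attention: only the newest posts stay visible
STEPS = 5000

def simulate(bot_fraction, seed=42):
    """Average feed quality over a run. Post quality is a number in [0, 1];
    bots always inject very low-quality posts in small bursts."""
    random.seed(seed)
    feed = [random.random() for _ in range(FEED_SIZE)]
    total = 0.0
    for _ in range(STEPS):
        if random.random() < bot_fraction:
            feed.extend([0.05] * 3)          # bot: a burst of junk posts
        elif random.random() < 0.7:
            # human reshares, biased toward the better posts currently visible
            feed.append(max(random.sample(feed, k=min(3, len(feed)))))
        else:
            feed.append(random.random())     # human writes a new post
        feed = feed[-FEED_SIZE:]             # older posts fall out of view
        total += sum(feed) / len(feed)
    return total / STEPS

print(f"average quality without bots: {simulate(0.00):.2f}")
print(f"average quality with 5% bots: {simulate(0.05):.2f}")
```

Comparing the two runs shows how even a small bot fraction pulls down the average quality of what everyone sees, because the junk posts occupy scarce slots in the limited feed.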

Unmasking automated user accounts

Some manipulators use fake news and bots to drive political polarization or to make money through advertising. At OSoMe, we recently uncovered a network of automated Twitter accounts, all coordinated by the same entity. Some posed as Trump supporters of the Make America Great Again campaign, while others posed as Trump opponents - but all asked for political donations.

How can we better protect ourselves from such manipulation? First of all, we need to know our cognitive biases and understand how algorithms and bots exploit them. OSoMe has developed several tools to highlight human vulnerabilities as well as those of social media. One of them is an app called »Fakey«, which helps users learn to spot misinformation. The app simulates a social media news feed that shows current articles from sources of low or high credibility. Users have to decide which content they would share and which they would check for truthfulness. When we analyzed the data from »Fakey«, we observed the expected social herding: users are more likely to trust information from questionable sources if they believe that many other people have shared it.
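The herding pattern can be described with a simple illustrative model in which the probability of sharing grows with the displayed share count, even for a low-credibility source. The functional form and parameters below are assumptions for illustration, not values fitted to the »Fakey« data.

```python
# Illustrative logistic model of social herding: P(share) rises with the
# displayed share count regardless of source credibility. Parameters are made up.
import math

def share_probability(source_credibility, displayed_shares,
                      base=-2.0, w_cred=1.5, w_herd=0.6):
    """P(share) from credibility in [0, 1] and the number of shares shown."""
    z = base + w_cred * source_credibility + w_herd * math.log1p(displayed_shares)
    return 1 / (1 + math.exp(-z))

for shares in (0, 10, 1000, 100000):
    p = share_probability(source_credibility=0.2, displayed_shares=shares)
    print(f"low-credibility article with {shares:>6} shares -> P(share) = {p:.2f}")
```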

Another publicly available program, »Hoaxy«, visualizes how posts spread via Twitter. The visualization is a network in which the nodes correspond to real Twitter accounts and the links represent the sharing of content between users. Each node is colored according to its Botometer score, which makes it easy to see how bots amplify the spread of fake news (see »Pollution by Bots«).
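A schematic version of such a diffusion map is easy to build with standard network tools. The accounts, links and bot scores below are invented, and this is not Hoaxy's actual code.

```python
# Schematic Hoaxy-style diffusion network: nodes are accounts, edges mean one
# account shared another's post, node color encodes a bot score. Data is made up.
import matplotlib.pyplot as plt
import networkx as nx

edges = [("@news_src", "@user1"), ("@news_src", "@bot_a"),
         ("@bot_a", "@bot_b"), ("@bot_a", "@bot_c"),
         ("@user1", "@user2"), ("@bot_b", "@user3")]
bot_scores = {"@news_src": 0.1, "@user1": 0.2, "@user2": 0.15,
              "@user3": 0.3, "@bot_a": 0.9, "@bot_b": 0.85, "@bot_c": 0.95}

G = nx.DiGraph(edges)
colors = [bot_scores[n] for n in G.nodes]

pos = nx.spring_layout(G, seed=3)
nx.draw_networkx(G, pos, node_color=colors, cmap="coolwarm",
                 vmin=0.0, vmax=1.0, with_labels=True, font_size=8)
plt.title("Sharing network, node color = bot score")
plt.axis("off")
plt.show()
```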

Investigative journalists have used our programs to analyze misinformation campaigns, such as the »Pizzagate« conspiracy theory, which falsely claimed that a pizzeria in Washington, D.C., was the site of a child abuse ring involving Hillary Clinton. They were also able to expose bot-driven efforts intended to deter certain groups from voting in the 2018 midterm elections. However, because machine learning algorithms are increasingly able to mimic human behavior, such manipulations are becoming harder to detect.

Aside from spreading false news, misinformation campaigns can divert attention from other, sometimes weightier, problems. To counter this, we recently developed the »BotSlayer« software. It extracts hashtags, links, accounts and other content that appear in tweets about topics a user cares about. This allows »BotSlayer« to flag hashtags and accounts that are trending and are likely being amplified by bots or coordinated accounts. The aim is to enable reporters, civil society organizations and political candidates to identify and track inauthentic influence campaigns in real time.
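The core idea can be caricatured in a few lines: flag hashtags whose volume suddenly jumps and whose spreaders look bot-like. The scoring rule, tweets and accounts below are made up and much simpler than what »BotSlayer« actually does.

```python
# Toy sketch of a BotSlayer-style monitor (not its actual algorithm): combine a
# sudden-spike factor with the average bot score of the accounts spreading a tag.
from collections import defaultdict

tweets = [  # (hashtag, account, bot score of the account) - invented data
    ("#election", "@user1", 0.1), ("#election", "@user2", 0.2),
    ("#hoaxtag", "@bot_a", 0.9), ("#hoaxtag", "@bot_b", 0.95),
    ("#hoaxtag", "@bot_c", 0.85), ("#hoaxtag", "@user3", 0.3),
]
previous_volume = {"#election": 2, "#hoaxtag": 0}  # counts from the last time window

volume = defaultdict(int)
score_sum = defaultdict(float)
for tag, account, bot_score in tweets:
    volume[tag] += 1
    score_sum[tag] += bot_score

for tag in volume:
    growth = volume[tag] / (previous_volume.get(tag, 0) + 1)   # sudden-spike factor
    avg_bot = score_sum[tag] / volume[tag]                     # how bot-like the spreaders are
    suspicion = growth * avg_bot
    print(f"{tag}: volume {volume[tag]}, growth x{growth:.1f}, "
          f"avg bot score {avg_bot:.2f}, suspicion {suspicion:.2f}")
```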

While these tools are useful, further steps are needed to curb the spread of fake news. Education is an important factor, even if schools cannot adequately cover every relevant topic. Some governments and social media platforms are trying to crack down on manipulation and fake news, and Twitter, for example, has set limits on automated posting. But who decides what is manipulative and what is not? The risk that such measures could intentionally or inadvertently suppress freedom of expression is considerable.

Another option could be to raise the barriers to creating and sharing low-quality content. For example, a price could be attached to sharing or receiving information. The payment would not have to be monetary; it could take the form of time or intellectual effort, such as solving puzzles. Free communication, after all, is not really free - by lowering the cost of information, we have also lowered its value.
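One concrete form such a non-monetary price could take is a small computational puzzle, similar to the proof-of-work schemes once proposed against e-mail spam. The sketch below shows the idea; it is an illustration, not a mechanism prescribed by the article's authors.

```python
# Hashcash-style cost for posting: a nonce must be found before a post is
# accepted, so each share consumes a little time. Illustration only.
import hashlib
from itertools import count

def solve_puzzle(post_text, difficulty=4):
    """Find a nonce so that SHA-256(post + nonce) starts with `difficulty` zeros.
    Higher difficulty means more time spent per post."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{post_text}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = solve_puzzle("Check out this article!")
print(f"puzzle solved with nonce {nonce}, hash {digest[:16]}...")
```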