Unlike in the past, disinformation is now deployed in a targeted manner: specific narratives are seeded and shared, usually crafted to stay loosely consistent with events that are actually happening. Bots, which are automated accounts used for spamming and amplifying particular topics, can be configured to retweet conversations about certain stories or to follow certain accounts. To better understand the matter, we spoke to Robin Kiplang’at, a senior investigative data analyst at Code for Africa.
How do we combat targeted disinformation?
We can do that by observing trending topics and digging beneath them, because in most conversations things should not be taken at face value. If you scratch the surface, you see patterns such as coordination, reinforcement, and copy-paste behavior. To monitor coordinated inauthentic behavior while respecting the line between free speech and surveillance, you can report the content to the platform, which can then disable or ban such accounts. That was our team’s main lever, because we don’t have any enforcement powers of our own. On our front, the proverbial naming and shaming was the only tool we used, backed by the evidence we provided to the platform.
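The copy-paste behavior described above can be surfaced with a simple grouping pass: normalize each post's text and flag any message pushed verbatim by many different accounts. This is a minimal illustrative sketch, not Code for Africa's actual tooling; the `posts` input shape and the `min_accounts` threshold are assumptions for the example.

```python
from collections import defaultdict
import re


def normalize(text):
    # Lowercase, strip URLs and collapse whitespace so near-identical
    # copy-paste posts map to the same key.
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()


def find_copy_paste_clusters(posts, min_accounts=3):
    """Group posts by normalized text and keep texts pushed by many accounts.

    `posts` is a list of (account, text) pairs -- a hypothetical input
    shape for this sketch, not a real platform API.
    """
    clusters = defaultdict(set)
    for account, text in posts:
        clusters[normalize(text)].add(account)
    return {text: accounts for text, accounts in clusters.items()
            if len(accounts) >= min_accounts}


posts = [
    ("@a", "Candidate X is a hero! #vote"),
    ("@b", "Candidate X is a hero!  #VOTE"),
    ("@c", "candidate x is a hero! #vote"),
    ("@d", "I had lunch today"),
]
print(find_copy_paste_clusters(posts))
```

Real investigations also weigh timing (many accounts posting within seconds) and account age, but even this crude grouping makes coordinated amplification visible.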
Can a regular user spot a targeted disinformation campaign?
Anything that doesn’t contain verifiable information should be treated as suspect. People are most gullible when the subject is emotive or strikes at something they care about deeply. One has to check and verify the authenticity of the message. Because everyone has a phone, we have delegated that responsibility to other people; instead, you can use a few resources to spot bots and treat opinions as opinions, not as research.
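The "resources to spot bots" mentioned above usually boil down to a handful of public profile signals. As an illustration only, here is a toy heuristic score; the field names and thresholds are assumptions for the sketch, not any platform's real detection criteria.

```python
def bot_likelihood_score(account):
    """Crude 0-3 heuristic score from public profile signals.

    `account` is a plain dict with illustrative keys; higher scores
    mean more bot-like, but none of these signals is conclusive.
    """
    score = 0
    if account.get("posts_per_day", 0) > 100:
        score += 1  # inhumanly high posting volume
    if account.get("followers", 0) < 10 and account.get("following", 0) > 1000:
        score += 1  # lopsided follow ratio, typical of amplifier accounts
    if not account.get("has_profile_photo", True):
        score += 1  # default or empty profile
    return score


suspicious = bot_likelihood_score(
    {"posts_per_day": 500, "followers": 2,
     "following": 5000, "has_profile_photo": False}
)
```

A regular user applies the same checks by eye: posting rate, follower ratio, and profile completeness, before trusting or sharing an account's claims.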
What challenge can this pose in influencing a political campaign?
One of the things I believe in is that we are shaped by our environment. People form beliefs when they can connect parts. For example, as a father and husband, I can persuade and influence my family in some way, which means I can make a difference in my community. On social media and because of the ripple effect my post has on the people around me (followers), if I manage to convince two or three friends against a certain person, then I think that’s the change that whoever is driving the campaign has achieved.
What does history tell us about disinformation and fake news?
The long-standing problem of political misinformation came to the fore after the 2016 US presidential election. Academics, journalists, and politicians expressed concern that the spread of fake news could destabilize political institutions and delegitimize media organizations. Despite these widespread concerns, there is relatively little research examining the consequences of consuming fake news in the current political environment. While its direct impact on the 2016 election may have been limited, online misinformation could have other important implications for society.
Fake news can also discredit the press directly, by accusing it of bias, complicity, and incompetence, or indirectly, by contradicting mainstream media reporting. Additionally, the mere existence of online misinformation packaged to look like a journalistic product can diminish the credibility of legitimate news. Our study confirms the relevance of these concerns and provides evidence that exposure to fake news is associated with a decrease in respondents’ media trust.