Argument: Misinformation and Biases Infect Social Media, Both Intentionally and Accidentally

33 Grammar Focus: Misinformation

This chapter focuses on the following grammar components found in the article “Misinformation and Biases Infect Social Media, Both Intentionally and Accidentally”:

  • Using Noun Clauses to State Position
  • Hedging & Subject-Verb Agreement, part 2
  • Analyzing Text for Present Perfect Verbs

Answer keys for each of the grammar activities are found in the answer key chapter.


Using Noun Clauses to State Position


Exercise 1: Using Noun Clauses to State Position

Noun Clauses Stating Positions using THAT

(*For more detailed information about how noun clauses work, check the sentence structure glossary.)

Noun clauses are often used in academic writing to state positions and make claims about the topics being discussed. These statements can:

  • Present a claim or belief
      • EXAMPLE: One group claims that our ability to reason is hijacked by our partisan convictions: that is, we’re prone to rationalization.
  • Show support or agreement
      • EXAMPLE: The good news is that psychologists and other social scientists are working hard to understand what prevents people from seeing through propaganda.
  • Show opposition or disagreement
      • EXAMPLE: The bad news is that there is not yet a consensus on the answer.
  • Present evidence as support
      • EXAMPLE: Some of the most striking evidence used to support this position comes from an influential 2012 study in which the law professor Dan Kahan and his colleagues found that the degree of political polarization on the issue of climate change was greater among people who scored higher on measures of science literacy and numerical ability than it was among those who scored lower on these tests.

Read the following paragraphs from the article. Find the noun clauses. How are they being used in each case? What verbs do they follow?

Be careful – THAT does not always mark a noun clause. Remember that a noun clause occurs after a verb. Also notice that there are no commas used with this type of dependent clause.

  1. It’s not surprising that there’s so much disinformation published: Spam and online fraud are lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.
  2. Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause information overload. That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that some ideas go viral despite their low quality – even when people prefer to share high-quality content.
  3. In fact, in our research we have found that it is possible to determine the political leanings of a Twitter user by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found that social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society.
  4. Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the homogeneity bias.
  5. Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias, because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.

 


Noticing Hedging


Instructions: Read the following passages from the article “Misinformation and Biases Infect Social Media, Both Intentionally and Accidentally” and highlight all hedging expressions that you can find. 

Paragraph 5. To avoid getting overwhelmed, the brain uses a number of tricks. These methods are usually effective, but may also become biases when applied in the wrong contexts.

Paragraph 9. In fact, in our research we have found that it is possible to determine the political leanings of a Twitter user by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society.

Paragraph 10. The tendency to evaluate information more favorably if it comes from within their own social circles creates “echo chambers” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into “us versus them” confrontations.

Paragraph 11. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were almost completely cut off from the corrections made by the fact-checkers.

Paragraph 13. The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, it may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

Paragraph 15. Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.

Paragraph 18. All these algorithmic biases can be manipulated by social bots, computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s Big Ben, are harmless. However, some conceal their real nature and are used for malicious intents, such as boosting disinformation or falsely creating the appearance of a grassroots movement, also called “astroturfing.” We found evidence of this type of manipulation in the run-up to the 2010 U.S. midterm election.

Paragraph 20. Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting both the cognitive, confirmation and popularity biases of their victims and Twitter’s algorithmic biases.

Paragraph 21. These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Paragraph 22. Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are many questions left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Paragraph 23. Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will not likely be only technological, though there will probably be some technical aspects to them. But they must take into account the cognitive and social aspects of the problem.


Subject-Verb Agreement – Error Correction


Read the following sentences. Find and correct errors in subject-verb agreement.

  1. Social media is among the primary sources of news in the U.S. and across the world. (1 error)
  2. It’s not surprising that there are so much disinformation published: Spam and online fraud is lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms is vulnerable to manipulation. (3 errors)
  3. Our research have identified three types of bias that makes the social media ecosystem vulnerable to both intentional and accidental misinformation. (2 errors)
  4. People is very affected by the emotional connotations of a headline, even though that’s not a good indicator of an article’s accuracy. Much more important are who wrote the piece. (2 errors)
  5. Another source of bias come from society. When people connects directly with their peers, the social biases that guides their selection of friends come to influence the information they see. (3 errors)
  6. The tendency to evaluate information more favorably if it comes from within their own social circles create “echo chambers” that is ripe for manipulation, either consciously or unintentionally. (2 errors)
  7. To study how the structure of online social networks make users vulnerable to disinformation, we built Hoaxy, a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections show that Twitter accounts that shared misinformation was almost completely cut off from the corrections made by the fact-checkers. (3 errors)
  8. The third group of biases arise directly from the algorithms used to determine what people see online. Both social media platforms and search engines employs them. (2 errors)
  9. For instance, the detailed advertising tools built into many social media platforms lets disinformation campaigners exploit confirmation bias by tailoring messages to people who is already inclined to believe them. (2 errors)
  10. Another important ingredient of social media are information that are trending on the platform, according to what is getting the most clicks. (2 errors)
  11. Most social bots, like Twitter’s Big Ben, are harmless. However, some conceals their real nature and is used for malicious intents, such as boosting disinformation or falsely creating the appearance of a grassroots movement, also called “astroturfing.” (2 errors)
  12. Even as our research, and others’, show how individuals, institutions and even entire societies can be manipulated on social media, there is many questions left to answer. It’s especially important to discover how these different biases interacts with each other, potentially creating more complex vulnerabilities. (3 errors)

Analyzing Text for Present Perfect Verbs


Exercise 1: Analyzing Text for Present Perfect

After reviewing the uses of present perfect and simple past, reread the following paragraphs from the article.

  • Underline present perfect and simple past verbs you see.
  • Why did the author choose to use present perfect in some cases and simple past in others?
  • Notice the present tenses as well. When do the authors use simple present? When do they use present continuous? Why?

 

  1. Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our Observatory on Social Media at Indiana University is building tools to help people become aware of these biases and protect themselves from outside influences designed to exploit them.
  2.  In fact, in our research we have found that it is possible to determine the political leanings of a Twitter user by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society.
  3. To study these manipulation strategies, we developed a tool to detect social bots called Botometer. Botometer uses machine learning to detect bot accounts, by inspecting thousands of different features of Twitter accounts, like the times of its posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15 percent of Twitter accounts show signs of being bots.
  4. A great deal of research in cognitive psychology has shown that a little bit of reasoning goes a long way toward forming accurate beliefs. For example, people who think more analytically (those who are more likely to exercise their analytic skills and not just trust their “gut” response) are less superstitious, less likely to believe in conspiracy theories and less receptive to seemingly profound but actually empty assertions (like “Wholeness quiets infinite phenomena”). This body of evidence suggests that the main factor explaining the acceptance of fake news could be cognitive laziness, especially in the context of social media, where news items are often skimmed or merely glanced at.

To test this possibility, we recently ran a set of studies in which participants of various political persuasions indicated whether they believed a series of news stories. We showed them real headlines taken from social media, some of which were true and some of which were false.

 

 
