Consider Andy, who is worried about catching COVID in 2020. Unable to read every article about the virus, he relies on advice from trusted friends instead. When one of them posts on Facebook that pandemic fears are overblown, Andy initially dismisses the idea. But then the hotel where he works closes, and with his job now in jeopardy, Andy begins to wonder how serious the threat of the virus really is. After all, no one he knows has died. A coworker’s post about the COVID “scare” being manufactured by Big Pharma in collusion with corrupt politicians echoes Andy’s distrust of the government.
His online searches quickly lead him to sites arguing that COVID is no worse than the flu. When Andy joins an online community of people who have been laid off or fear they will be, he discovers that many of the other members are asking, “What pandemic?” When he learns that some of his new friends are going to a rally demanding an end to lockdowns, he decides to join them. Hardly anyone at the huge demonstration wears a mask, Andy included. When his sister asks about the rally, Andy shares his belief that COVID is a hoax: it has become part of who he is.
Andy’s story illustrates a minefield of cognitive biases.
We favour information from people we trust, those in our inner circle. We pay attention to risks—for Andy, the possibility of losing his job—and are more likely to share information about them. We seek out and remember things that fit well with what we already know and understand. These biases are products of our evolutionary past, and for tens of thousands of years they served us well. People who behaved in accordance with them—for example, by avoiding the overgrown pond bank where someone said there was a viper—were more likely to survive.
Modern technologies, however, amplify these biases in harmful ways. Search engines direct Andy to sites that inflame his suspicions, and social media connects him with like-minded people, feeding his fears. Worse still, bots—automated social media accounts that impersonate humans—enable misguided or malicious actors to exploit his vulnerabilities.
The glut of information available online compounds the problem. Viewing and producing blogs, videos, tweets, and other units of information called memes has become so cheap and easy that the information marketplace is inundated. Unable to process all of this information, we let our cognitive biases decide what to pay attention to. These mental shortcuts influence which information we search for, comprehend, remember, and repeat.
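The dynamic described above—far more memes than attention to go around—can be sketched as a toy agent-based simulation. The sketch below is illustrative only (all names and parameters are assumptions, not OSoMe’s actual models): agents with a finite memory either invent a new meme or reshare one they recently saw, and scarce attention alone lets a handful of memes dominate.

```python
import random
from collections import Counter

def simulate(n_agents=50, attention=10, p_new=0.1, steps=5000, seed=42):
    """Toy limited-attention model: agents reshare memes from a finite
    memory, so new memes constantly compete for scarce attention."""
    rng = random.Random(seed)
    # Each agent follows a few random others (illustrative network).
    followers = {a: rng.sample(range(n_agents), 5) for a in range(n_agents)}
    memory = {a: [] for a in range(n_agents)}  # each agent's recent memes
    shares = Counter()                         # how often each meme is shared
    next_meme = 0
    for _ in range(steps):
        a = rng.randrange(n_agents)
        if not memory[a] or rng.random() < p_new:
            meme = next_meme   # invent a brand-new meme
            next_meme += 1
        else:
            meme = rng.choice(memory[a])  # reshare something seen recently
        shares[meme] += 1
        for f in followers[a]:
            memory[f].append(meme)
            if len(memory[f]) > attention:  # finite attention: the oldest
                memory[f].pop(0)            # memes are simply forgotten
    return shares

shares = simulate()
counts = sorted(shares.values(), reverse=True)
# A few memes rack up many shares while most are shared only once or twice,
# regardless of any notion of quality.
print(counts[:5], sum(1 for c in counts if c == 1))
```

Even this crude sketch reproduces the qualitative point: with attention fixed and memes cheap to produce, popularity concentrates on a few winners chosen largely by chance and timing rather than merit.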
Understanding these cognitive vulnerabilities, and how algorithms use or exploit them, has become vital. At the Observatory on Social Media (OSoMe, pronounced “awesome”) at Indiana University Bloomington, our teams are using cognitive experiments, simulations, data mining, and artificial intelligence to understand the cognitive vulnerabilities of social media users. Insights from psychological research on the evolution of knowledge conducted at the University of Warwick in England inform the computer models built at Indiana, and vice versa. We are also developing analytical and machine-learning tools to combat manipulation on social media. Some of these tools are already being used by journalists, civil society organisations, and individuals to identify inauthentic actors, track the spread of false narratives, and promote news literacy.