With the exponential growth of social media, online news sources (credible and otherwise), chat groups, and artificial intelligence (AI), more and more people are using the internet as their primary source of news and information. While much of this information is credible, much is not. And emerging tools like AI are making it increasingly difficult to separate the accurate from the harmful.
In an effort to help local consumers of information sort through this, the Friends of the Edmonds Library offered a Tuesday afternoon program aimed at helping us become more savvy in identifying and acting on misinformation. Led by Jason Young and Cindy Aden, both representatives from the University of Washington’s Information School, the session focused on the important role that emotion plays in misinformation, and how to improve and build your personal digital acumen so you are better prepared to explore online.
“One distinction I want to make right off the bat is the difference between misinformation and disinformation,” began Aden. “It comes down to where the information comes from and why. Misinformation is inaccurate information that may be unintentional. Disinformation on the other hand is formulated to intentionally deceive and manipulate. Sometimes it’s difficult to tell these apart. Knowing the source can help, but it’s not foolproof. But one thing is for certain: with the growth in social media, it is much easier to spread misinformation.
“The concept of alternative facts is almost mind-boggling,” she continued. “Folks can say whatever they want, regardless of whether it has any basis in fact. It allows them to stay within their own realm of what they think should be true. This leads to an effect we call confirmation bias, where you only look for information that supports your chosen beliefs while ignoring information that doesn’t support your point of view.”
She went on to explain that part of confirmation bias is that our brains are always looking for a more efficient way to interpret information and have a natural bias toward maintaining what we already believe. It’s simply easier for the brain to say, “oh yeah, this fits with what I already believe” than to say, “oh wait, this fact doesn’t fit, so I need to re-examine and perhaps change my core beliefs.”
In the latter case, your brain is forced to either reinterpret the new information to fit its chosen beliefs, ignore the new information, or – most difficult of all – change its core beliefs.
“Our brain looks for familiar patterns, and it’s truly frightening to give these up,” she explained. “We want to find something safe and familiar. It’s extremely difficult for folks with deeply entrenched beliefs to change their minds.”
And social media feeds this. One way is by using algorithms that in effect curate your information by looking for things similar to what you’ve looked for before and then presenting them to you. And the advent of new and more sophisticated techniques using AI and deep fakes adds to the problem of differentiating between what is real and what is not.
“And for increasing numbers of folks, social media is their only source of information,” she added.
“So there’s the problem,” said Jason Young as he took over the session. “For the rest of today, we’ll be looking at what we can do about it.”
He went on to explain that the two major drivers of the misinformation problem are changes in technology and a social shift that has resulted in many folks simply not wanting the truth anymore. They want to embrace false things because they’re more comfortable that way – that’s classic confirmation bias.
“First, technology,” he continued. “Today anyone with a cell phone and an internet connection can say anything they want, and people on the other side of the world hear it instantly. AI can produce fake photos (remember when we used to say pictures don’t lie? We can’t say that anymore) and fake videos, where a misinformation purveyor can make politicians and officials – anyone really – look like and say whatever they want.”
He pointed out that this kind of misinformation targets not just your mind but your emotions as well. By playing on emotions, the misinformation purveyor hopes to confuse you and get you to react more quickly than you should – to take action without thinking.
“The classic example is getting a call from a grandchild in trouble who needs money right away,” he said. “The voice sounds right, and the kid knows your name, so you rush to the 7-Eleven, buy a cash card and give the number to the scammer without taking time to think it through. Only later you learn that the child’s voice was faked and your money is gone.”
Young then presented a simple technique to raise media literacy and help you see through misinformation. It’s called SIFT – the S stands for stop; the I for investigate; the F for find other coverage; and the T for trace the information to the original source.
“Stop is the vital first step,” he explained. “When you find potentially suspicious information, don’t immediately act on it, share it, or do anything else with it. Next, check out the source – ask yourself if there is a reason you should believe it. Is it credible? Is it something you’ve heard of before? If it smells funny, do a Google search on the source’s name. Then check the subject matter – Google is great for this too. Has anyone else reported on this? Can you find where the story originated? If an image is involved, do a Google image search to see where else it’s been published.”
He then provided an example from a real post on X (formerly Twitter):
“You see this, and it’s emotional, it’s scary,” he explained. “A zombie virus that poses a threat to humans, comes from Russia, and will be unleashed by climate change. Pushes lots of buttons, right?”
But go back to SIFT. First, check out the source, Buzzing Pop – is it credible, and has any other news organization reported this? Check for other information about zombie viruses. Check the photo to see if it’s real, AI-created, or taken from somewhere unrelated.
“This will take you less than five minutes with Google,” he added.
Young then took up how to detect false images and videos generated or altered by AI.
“AI is getting better all the time,” he explained, “and it’s getting harder to see through it. At this time, deep fakes still have trouble with image backgrounds, eyeglasses, teeth and hair, so if these don’t look quite right, you’re likely dealing with a fake, but newer AI techniques are making it harder to tell.”
In the two images above, the girl on the left is real. But look at the man on the right. The hair doesn’t look natural, and the background is mottled. He’s a fake.
“But our old SIFT technique still applies,” he concluded. “When you encounter something that seems odd, stop and take a moment to look closely. Is it asking you to do something weird? Is it trying to make you click on a link?”
If you do a Google image search on a person’s face, it will give info on where that face has appeared before – if there’s no backstory, you’re likely dealing with an AI-generated fake.
“The bottom line is to be careful. Remember that people have lied and hidden truths forever, but technology is making it so much easier and more widespread,” he added. “Don’t share something on social media (especially if it’s sensational and you just feel drawn to click it) until you check the sources and do the research. Try to be aware of your own confirmation biases, avoid following clickbait, and don’t get pulled in by buzzwords and emotional content. The purveyors of misinformation are getting better all the time, but that doesn’t mean there isn’t a lot of good information out there, and a little bit of thought and fact-checking will go a long way toward helping you separate the good from the bad.”
— Story and photos by Larry Vogel
Kudos for this!
Disinformation and misinformation have been around for a long time. Back in the early ’90s, I remember having many discussions with my grandma when she was about to fall for pyramid schemes, “Nigerian Prince” scams, or giving money to televangelists. I think the most difficult part to overcome is one’s willingness to believe. She thought she was going to get rich or was saving someone in desperate need. She didn’t want to believe that she was being scammed, because that would mean she was gullible. It was fortunate that she trusted me enough to at least ask before any money left her hands. I’m definitely not the smartest person in the room, but being outside the situation helped me evaluate it better than she could.