Google and YouTube publish plans to combat midterm election misinformation
As part of the effort, Google plans to launch a new tool in the coming weeks that will highlight local and regional journalism about campaigns and races, the company said in a blog post. Searches for “How to vote” in English and Spanish will soon return highlighted information sourced from state election officials, including important dates and deadlines based on the user’s location, as well as instructions on acceptable ways to vote.
Meanwhile, YouTube said it will highlight mainstream news sources and display labels under videos in English and Spanish that provide accurate election information. YouTube said it is also working to prevent its algorithms from recommending “harmful voting misinformation” to viewers.
The announcement marks the latest attempt by a big tech platform to persuade the public that it is ready for a high-stakes campaign that could dramatically reshape the Congressional agenda, including upcoming legislative battles over how the U.S. will regulate the platforms themselves.
YouTube has already begun removing midterm-related videos that make false claims about the 2020 election in violation of its policies, the company said in a blog post.
“This includes videos that violated our Election Integrity Policy by alleging widespread fraud, error or interference in the 2020 US Presidential Election, or alleging that the election was stolen or rigged,” said YouTube.
While both Twitter and Meta will rely on flagging claims of vote-rigging, each appears to be taking a different route. Twitter said last year it was testing new misinformation labels designed to be more effective at reducing the spread of false claims, suggesting the company may lean even more heavily on labeling. Meta, by contrast, has said it will likely apply fewer labels than it did in 2020, citing “feedback from users that those labels have been overused.”
Beyond responding to false claims and misinformation, or promoting reliable information, tech companies still need to thoroughly rethink their core design, said Karen Kornbluh, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund.
“The design of the system encourages inflammatory content and allows for user manipulation,” Kornbluh said. “The Facebook whistleblower has shown, and we see it on other platforms, that algorithms themselves encourage extremist organizing. We know that threat actors have used social media as a customer relationship management system for extremist organizing in preparation for January 6th. They work across platforms to plan, create invite lists, and then generate decentralized new groups of foot soldiers. Those design loopholes are what the platforms need to address.”