Google and YouTube publish plans to combat midterm misinformation

As part of the effort, Google plans to launch a new tool in the coming weeks that will highlight local and regional journalism about campaigns and races, the company said in a blog post. Searches for “How to vote” in English and Spanish will soon return highlighted information sourced from state election officials, including important dates and deadlines based on the user’s location, as well as instructions on acceptable ways to vote.

Meanwhile, YouTube said it will highlight mainstream news sources and display labels under videos in English and Spanish that provide accurate election information. YouTube said it is also working to prevent it from algorithmically recommending “harmful voting misinformation” to viewers.

The announcement marks the latest attempt by a big tech platform to persuade the public that it is ready for a high-stakes campaign that could dramatically reshape the congressional agenda, including upcoming legislative battles over how the U.S. will regulate the platforms themselves.

It comes as many of the underlying issues arising from the 2020 presidential election, including baseless allegations of voter fraud and false claims about the election result, remain unresolved, fueled in some cases by the very candidates running for office this year. And even as tech companies have pledged their vigilance, disinformation experts warn, extremists and others looking to pollute the information environment are evolving their tactics, creating the possibility of new exploits that platforms didn’t anticipate.

YouTube has already begun removing midterm-related videos that make false claims about the 2020 election in violation of its policies, the company said in a blog post.

“This includes videos that violated our Election Integrity Policy by alleging widespread fraud, error or interference in the 2020 US Presidential Election, or alleging that the election was stolen or rigged,” said YouTube.

This policy goes further than what Twitter and Meta, the parent company of Facebook and Instagram, have announced for the midterms. Twitter’s civic integrity policy, which is active for the midterms, bans claims aimed at undermining “public confidence” in the official results — but while tweets questioning the result can be flagged or restricted, the company has not pledged to remove them.
Meta said this month that its midterm plan will include removing false claims about who can vote and how, as well as calls for election-related violence. But Meta stopped short of banning claims of rigged or fraudulent elections, and the company told the Washington Post that such claims will not be removed.

While both Twitter and Meta will rely on flagging claims of vote-rigging, each appears to be taking a different route. Twitter said last year it was testing new misinformation labels designed to be more effective at reducing the spread of false claims, suggesting the company may lean even more heavily on labeling. Meta, by contrast, has said it is likely to use fewer labels than it did in 2020, citing “feedback from users that those labels have been overused.”

Aside from responding to false claims and misinformation, or promoting reliable information, tech companies still need to thoroughly rethink their core functions, said Karen Kornbluh, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund.

“The design of the system encourages inflammatory content and allows for user manipulation,” Kornbluh said. “The Facebook whistleblower has shown, and we see it on other platforms, that algorithms themselves encourage extremist organizing. We know that threat actors have used social media as a customer relationship management system for extremist organizing in preparation for January 6th. They work across platforms to plan, create invite lists, and then generate decentralized new groups of foot soldiers. Those design loopholes are what the platforms need to address.”
