During a recent government meeting, discussions centered on the definitions and implications of disinformation and misinformation, highlighting the challenges faced by social media platforms in regulating content.
Mr. Harris, a key witness, defined disinformation as information deliberately distributed to mislead, and misinformation as false information spread without deliberate intent to deceive. He emphasized that while both forms of misleading information are concerning, disinformation poses the greater threat in the current context.
The conversation then shifted to the role of social media companies in moderating content. Harris clarified that although he previously worked on a team at Facebook that supported third-party fact-checking, he does not believe social media platforms should act as arbiters of truth. Instead, he advocated partnerships with independent fact-checkers, including those affiliated with reputable news organizations such as Reuters and the Associated Press. These third-party fact-checkers, credentialed by the International Fact-Checking Network, give social media companies a framework for labeling content and linking to verified information.
However, the discussion revealed a divide in perspectives. One participant expressed skepticism about relying on news organizations as arbiters of truth, questioning their track record and suggesting that their determinations of what counts as misinformation could themselves be flawed. The exchange underscored the ongoing debate over the responsibilities of social media platforms in combating misleading information and the difficulty of defining truth in the digital age.
As the meeting concluded, it was clear that disinformation remains a pressing concern, with participants calling for more effective strategies to address both misinformation and disinformation on social media.