Facebook Messenger users have expressed unease over the platform’s latest update, which the company says shields end-to-end encrypted chats from scammers. How exactly does that work if the app is safe from prying eyes?
The company said that pop-up notices will appear “when something doesn’t seem right” – a new “safety feature” designed to guard against “potentially harmful interactions and possible scams.”
Messenger’s Secret Conversations feature offers end-to-end encryption, but Facebook says it has found a way to detect dangerous behavior without infringing on privacy. The tech giant claims that its new system relies solely on metadata and does not analyze the content of messages. In practice, this means that Facebook examines every other aspect of the conversation – who is participating in the chat, where the messages are being sent from, how quickly they are being sent, and other signals the company believes are relevant to sniffing out scammers.
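Facebook has not published its detection logic, but the general approach of scoring a conversation on metadata alone can be illustrated. The sketch below is entirely hypothetical – the signals (account age, message rate, mutual ties) and thresholds are invented for illustration, not drawn from any Facebook system – and it shows how a warning could be triggered without the message text ever being read:

```python
# Hypothetical sketch: scoring a chat on metadata only, never on content.
# All field names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class ChatMetadata:
    sender_account_age_days: int   # how old the sending account is
    messages_per_minute: float     # burst rate of outgoing messages
    mutual_friends: int            # social ties between participants
    is_new_contact: bool           # first chat between these users

def scam_risk_score(meta: ChatMetadata) -> float:
    """Return a 0..1 risk score computed from metadata only."""
    score = 0.0
    if meta.sender_account_age_days < 30:
        score += 0.4          # very new accounts are higher risk
    if meta.messages_per_minute > 10:
        score += 0.3          # rapid-fire messaging suggests automation
    if meta.is_new_contact and meta.mutual_friends == 0:
        score += 0.3          # a stranger with no shared ties
    return min(score, 1.0)

# A pop-up notice might appear once the score crosses some threshold.
suspicious = ChatMetadata(sender_account_age_days=5,
                          messages_per_minute=25.0,
                          mutual_friends=0,
                          is_new_contact=True)
print(scam_risk_score(suspicious) >= 0.7)  # prints True
```

Note that nothing in this sketch touches the encrypted payload: every input is information the platform can observe about the conversation from the outside, which is precisely what makes the approach compatible with end-to-end encryption – and what makes some users uneasy about how much those signals reveal.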
The new safety measure was met with considerable skepticism, however.
“That sounds like ‘control’ and ‘censor,’” noted one displeased user, who questioned the wisdom of allowing Facebook to “decide” what constitutes harmful interactions.
Others joked that Facebook should do more to safeguard users from its own controversial practices.
But with more than 1 billion users, it doesn’t appear that Facebook’s bottom line will be hit by the new policy. After the safety feature was announced, one uninterested user begged the company to reintroduce polls to Messenger – a request that was granted in the latest update.
The tech giant has come under increasing scrutiny for its efforts to police speech on its platforms. CEO Mark Zuckerberg recently unveiled a “Supreme Court” that will serve as an “independent content oversight board” for the site, but the panel of “experts” has been accused of having a clear liberal bias.