Meta has announced that later this year, it will introduce a new safety tool to prevent kids from sending and receiving nude photos, especially in encrypted chats.
Adults on Facebook and Instagram will also be able to use the tool, though for them it is expected to be optional.
It comes after the government and police criticised Meta for encrypting Messenger conversations by default.
They claim that encryption will make it more difficult for the company to identify child abuse.
According to Meta, the new feature is intended solely to shield users, especially women and teenagers, from receiving nude photos or being coerced into sending them. (Under-13s are not permitted to use its platforms.)
It also revealed that, by default, minors will no longer be able to receive messages from strangers on Instagram and Messenger.
Earlier this month, police chiefs attributed the rise in child sexual offences in England and Wales partly to children sending nude photos.
Additionally, court documents recently made public as part of a US lawsuit against Meta claim that the company has evidence that 100,000 teenage Facebook and Instagram users are subjected to daily online sexual harassment. The lawsuit, according to Meta, misrepresents its work.
To help shield teenagers from offensive images in their messages, the tech giant unveiled a new feature on Thursday.
More information about this system’s functionality in encrypted chats will be made public later this year.
Government, law enforcement, and prominent children's charities have strongly criticised Meta's recent decision to make end-to-end encryption (E2EE) the default for Facebook Messenger chats.
E2EE means that only the sender and the recipient can read messages, which critics say prevents Meta from identifying and reporting content that may depict child abuse.
Other messaging apps that use this technology and have vigorously defended it include Apple’s iMessage, Signal, and the Meta-owned WhatsApp.
But some critics contend that in order to identify child abuse being transmitted through encrypted apps, platforms ought to implement a method known as client-side scanning.
Client-side scanning refers to systems on a user's device that check messages against known child abuse images before the messages are encrypted and sent. Any message that may contain unlawful content is flagged to the company.
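The matching step described above can be sketched in a few lines. This is only an illustration: real systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas this sketch uses a plain SHA-256 digest, and the hash database and function names are invented for the example.

```python
import hashlib

# Hypothetical database of digests of known unlawful images.
# The single entry is the SHA-256 of the placeholder bytes b"abc",
# included purely so the match path can be demonstrated.
KNOWN_IMAGE_HASHES = {
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def scan_before_encrypt(image_bytes: bytes) -> bool:
    """Return True if the image matches a known digest and would be
    flagged to the provider before the message is encrypted and sent."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_HASHES

# The check runs on the sender's device, before encryption:
payload = b"holiday photo"
if scan_before_encrypt(payload):
    print("match: flag to provider")
else:
    print("no match: encrypt and send")
```

The key design point, and the reason the approach is controversial, is that the check happens on the device before encryption, so the provider learns about matches even though it cannot read the encrypted message itself.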
The new system from Meta “shows that compromises that balance the safety and privacy rights of users in end-to-end encrypted environments are possible,” according to children’s charity the NSPCC.
Meta says its new feature does not involve client-side scanning, which it argues undermines encryption's core privacy guarantee: that a message's contents are known only to the sender and recipient.
The company told the BBC that the feature will use machine learning only to detect nudity and will run entirely on the device. Meta says using machine learning to detect child abuse is far harder, and that deploying it across billions of users would carry a significant risk of error, with innocent people potentially being reported, with dire consequences.
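The on-device flow Meta describes can be sketched roughly as follows. Everything here is an assumption for illustration: the classifier is a stand-in stub, and the threshold and function names are invented. The point is that the image is scored and blurred locally, nothing leaves the device, and no report is made.

```python
NUDITY_THRESHOLD = 0.8  # assumed cut-off, not a real Meta parameter

def classify_nudity(image_bytes: bytes) -> float:
    """Stand-in for an on-device ML model returning a score in [0, 1].
    A real implementation would run a neural network locally."""
    return 0.0  # placeholder: this stub treats every image as safe

def handle_incoming_image(image_bytes: bytes, classifier=classify_nudity) -> str:
    """Decide locally how to display an incoming image; no data is
    sent off the device and nothing is reported to the provider."""
    score = classifier(image_bytes)
    if score >= NUDITY_THRESHOLD:
        return "blurred"  # shown blurred with a warning the user can override
    return "shown"        # displayed normally
```

This illustrates the distinction Meta draws: unlike client-side scanning, the result of the check stays on the phone, which is why the company argues it does not weaken the encryption.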
Instead, Meta employs a number of methods to safeguard children which, in its view, do not compromise privacy. These include:

- Systems that recognise suspicious behaviour by adults and prevent them from interacting with minors or from finding other suspicious adults.
- Preventing adults over 18 from messaging teenagers who do not follow them.
New tools
In addition to releasing several new child safety features on Thursday, Meta claims to have launched more than thirty tools and resources to assist in keeping kids safe.
According to an announcement, kids will by default not be able to receive messages on Instagram or Facebook Messenger from users they do not follow or are not connected to.
Under existing Meta policies, adults are already prohibited from messaging teenagers who do not follow them.
In a blog post, Meta said: "This new default setting limits who can message or add teens to group chats to people they already follow or are connected to."
Using the parental supervision tools, parents can now deny teenagers' requests to change their default safety settings, such as who can send them direct messages or whether they can view more sensitive content. Previously, parents were only notified of such changes.
Source | BBC