Blog: The UK’s Take on Online Harm Forgot This One Important Solution
The spirit of the UK Government’s recent white paper on online harm may be welcome, but it doesn’t come without controversy. It’s time to consider using advances in AI and mobile hardware to start detecting harmful content directly on the devices themselves.
Considered a virtual wild west, the internet has resisted governance for decades, which is why the UK government’s recent white paper on online harm has been the talk of this lawless town. Frustration with big tech’s slow development of safety tools is entirely understandable, especially when it comes to child safety and the exposure online activity gives children to inappropriate content, child sexual abuse, grooming, cyberbullying and sexting, all major points of discussion in the paper.
But targeting service providers alone overlooks what could be achieved through the other ways technology can safeguard children, starting with the very software that powers our personal electronic devices — operating systems such as iOS, Android and Windows.
This undertaking, led by the Department for Digital, Culture, Media & Sport (DCMS) and the Home Office, has already been called out as potential “state regulation” of the speech of millions of British online users. Rather than target big tech’s responsibility, it is more likely to penalise the smaller players on the internet, since the regulations govern pretty much any interactive platform on the web. And that is before we get to the technical challenges online service providers will face under this regulation.
The challenges of regulation
For starters, there’s the issue of age verification. To apply certain restrictions, such as preventing children from accessing pornography, service providers need to first detect the user’s age without compromising individual privacy. If site owners started doing this, there’s a real danger that service providers, like PornHub, would also start collecting and storing sensitive personal information just to verify a person’s age. No workable solution exists to date; indeed, the government has been trying, unsuccessfully, to introduce age verification on porn sites since 2017.
Secondly, the white paper suggests the regulations should apply to companies of all sizes dealing with user-generated content, whether public or private. That could include everything from social media platforms, instant messaging services, hosting sites and discussion forums to email and even search engines.
While the white paper states that “companies should invest in the development of safety technologies,” it should be acknowledged that detecting harmful content, whether it’s an image, audio, video or text, is technologically challenging. Building that technology from scratch is an expensive undertaking. Currently, only companies amassing vast amounts of data, such as Google, Microsoft and Amazon, provide solutions to some of the problems outlined in the paper. For these giants, it will be a big opportunity to sell these services to smaller companies, making those minor players dependent on the big hitters. We’ve already seen how GDPR benefited tech behemoths and set back smaller operations.
The other issue is privacy. If a company doesn’t have the technology or the people to moderate content, it will have to hand that content to a third party. In most cases this will be user-generated content — conversations, images, videos and so on — which needs at least some level of human review. Facebook, YouTube, TikTok and other large social networks employ tens of thousands of people to moderate their content in addition to automated filters. Smaller companies will need to outsource this function, with real implications for privacy, as ever more private user data is funnelled to third parties.
Finally, it’s worth mentioning that end-to-end encryption means some service providers, e.g. WhatsApp, can’t moderate content transferred between users. The white paper offers little detail on how the government will tackle this, but it hints that WhatsApp’s end-to-end encryption is a potential risk to children. Does this white paper further pave the way for something the government has attempted in the past — weakening end-to-end encryption in the UK?
Is there a better way to tackle online harm?
There are many unanswered questions in the white paper, and finding the right answers will take months if not years. However, one important question we’d like to answer is whether regulating the vast number of online tech companies is really the best way to improve child safety online. Device intelligence could be a far better and quicker solution.
If you’re hearing this term for the first time, let us explain. Modern mobile devices are powerful enough to perform complex Artificial Intelligence tasks in real time on the device itself, rather than in the cloud.
Device intelligence, also referred to as edge computing, isn’t new. Apple performs face recognition to unlock iPhones and to identify people in your digital photos. Google recently released an on-device speech recogniser to power speech input on Gboard, Google’s mobile keyboard. And in the coming years, we’ll see devices become more intelligent as more of the tasks currently done in the cloud move onto the device itself.
When it comes to child protection, device intelligence can help. Specialised Artificial Intelligence models running on a mobile device can detect indecent images, videos and harmful conversations such as bullying or grooming. The implementation approach can be similar to on-access virus scanning, where the virus scanner is automatically activated each time a particular file is accessed.
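To make the analogy concrete, here is a minimal sketch of the on-access pattern in Python. Everything in it is illustrative: the `classify_image` stub stands in for a compact on-device AI model (a real device would run something like a quantised neural network), and the “harmful” marker it looks for is entirely made up.

```python
# Hypothetical sketch of on-access content scanning, modelled on
# on-access virus scanning: each time a file is opened, a locally
# run classifier inspects it before the requesting app sees the bytes.

from pathlib import Path


def classify_image(data: bytes) -> str:
    """Stub for an on-device AI model; returns a content label.

    Assumption: a real model would score the actual pixels. Here we
    simply flag files containing a made-up marker byte sequence.
    """
    return "harmful" if b"FLAGGED" in data else "safe"


def on_access_open(path: str) -> bytes:
    """Return a file's contents only if the on-device scan clears it."""
    data = Path(path).read_bytes()
    if classify_image(data) == "harmful":
        # Block at the OS layer, before the content reaches the app.
        raise PermissionError(f"blocked by on-device content filter: {path}")
    return data
```

The key property of this design is that the scan happens entirely on the device: no user content is uploaded anywhere, and the blocking decision is made before any app can display the file.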
Safety controls such as Apple Screen Time and Google Family Link, which empower parents, can be extended to block content or notify parents when certain types of harmful content are detected.
Device-intelligence-powered content moderation also removes the need for age verification by service providers, as parents gain greater control over which restrictions to enable and at what age. It also provides more flexibility. The white paper mentions, for instance, “inappropriate material” for children, a rather subjective notion: what is inappropriate for one parent could well be appropriate for another.
With device-level controls, parents decide how long to keep certain restrictions in place. This approach also strikes a better balance on privacy: user activity never needs to be shared with third parties, as data never leaves the device, while linked parental controls can still alert parents whenever harmful activity is detected.
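A sketch of how such parent-configured, device-enforced policies might look. The category names and actions below are illustrative assumptions, not drawn from any real parental-control API:

```python
# Hypothetical device-level parental policy: the parent picks an
# action per content category, and the device enforces it locally
# when its on-device model labels a piece of content.

from dataclasses import dataclass, field


@dataclass
class ParentalPolicy:
    # Parent-chosen action per detected category: "block", "alert" or "allow".
    actions: dict = field(default_factory=dict)
    # Alerts stay on-device and go only to the linked parent device.
    alerts: list = field(default_factory=list)

    def enforce(self, category: str) -> bool:
        """Return True if the content may be shown to the child."""
        action = self.actions.get(category, "allow")
        if action == "block":
            return False
        if action == "alert":
            self.alerts.append(category)  # would notify the parent's device
        return True
```

This is where the flexibility comes from: one parent might block explicit images outright but only ask to be alerted about bullying, while another configures the same categories quite differently, with no regulator or service provider in the loop.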
For more straightforward tasks, actions can be blocked immediately. Sexting, for example, could be detected and prevented as soon as a photo is taken.
And end-to-end encryption won’t need to be compromised. The operating system running on the device has access to the content after it’s been transferred and decrypted. For example, in the case of images received in WhatsApp, the operating system can “see” those images while Facebook cannot. Such an approach can protect children not only from harmful online content but also from content transferred or accessed offline using Apple’s AirDrop or any other Bluetooth-powered peer-to-peer sharing service.
Making online child-safety a reality
We only need three companies to make it happen — Apple, Google and Microsoft. Because unlike the regulation of countless service providers handling user-generated content, the operating systems of this trio cumulatively power 97 per cent of all internet-connected user devices.
Most of them already have AI tools that can target online harm, such as explicit image detection. Other areas may require more work, and optimising these models to run on mobile devices in real time remains challenging.
What’s more, all three companies need to embed these controls into their operating systems and parental control mechanisms. Right now there’s no way for third-party application developers to access the necessary depths of the operating system to implement such solutions.
Device intelligence alone won’t solve all the problems the government has flagged in the white paper, but at least in the context of child safety, this solution could reach more parents and children far more swiftly.