According to government guidance, “The Online Safety Bill is a new set of laws to protect children and adults online. It will make social media companies more responsible for their users’ safety on their platforms.”
The bill’s introductory page says:
“The draft Online Safety Bill delivers the government’s manifesto commitment to make the UK the safest place in the world to be online while defending free expression.”
The bill aims to remove illegal content, protect children from harmful material, enforce age limits so that children cannot access age-inappropriate content, mandate risk assessments and make it easier to report problems online. However, it also introduces intrusive measures to inspect what you send to your friends through tools such as WhatsApp. On this aspect, the government is either uninformed or dishonest, or both.
A long time coming but no dialogue
The legislation has had an unusually long period of development, under five different digital ministers and four different prime ministers. More and more has been stuffed into it as it has slowly lumbered on through different administrations.
As the bill directs how technology is used, it is the equivalent of an IT policy within a very large organisation (in this case, the UK). Normally, when this type of document is produced, there is a good deal of dialogue between the board of directors (the cabinet and prime minister in this instance) and the staff who implement procedures on a day-to-day basis (meaning UK companies who are bound by the new legislation). This is to ensure that the policy direction is within the bounds of what is technically feasible.
It doesn’t appear that any such dialogue happened in this case.
There is a lot to unpack, but the aspects that seem to cause the most controversy relate to restricting access to harmful content, age verification for users and the need for inspection of end-to-end encryption (that’s WhatsApp, Signal and Telegram to you and me).
What is harmful content? Well, how long is a piece of string? It all depends, doesn’t it? In this case, the direction on the gov.uk website is:
“Some content is not illegal but could be harmful or age-inappropriate for children. Platforms will need to prevent children from accessing it.”
The phrases ‘could be harmful’ and ‘age-inappropriate’ prevent any meaningful automation of this task. Content may not be illegal, but the home secretary may still decide that it is unsuitable for young people.
Take Wikipedia for example. It has a number of factual pages relating to sexuality. Would any of those be considered unsuitable for young people? How would this be policed, when Wikipedia has more than a hundred thousand edits across its pages every day?
Children by default?
The bill requires providers to assume that their users are children by default. If a website hosts any material deemed harmful, then age verification must be introduced. This conflicts with Wikipedia’s commitment to collect minimal data about its readers and contributors. The Wikimedia Foundation has made clear that it may block UK users from accessing the site rather than comply. It will be interesting to see the effect of this on coursework produced at schools and universities across the country.
Another gap between reality and policy relates to the government’s wish to intercept potentially harmful communication. This raises the issue of end-to-end encryption. When you use an app like WhatsApp, your message is encrypted on your phone before it leaves the app and travels across the internet. It is only decrypted by the app on the recipient’s phone. Most messaging applications now work this way, ever since Edward Snowden revealed that governments will intercept and read anything and everything they possibly can if given the opportunity.
How encryption works
This website takes you to a tool that encrypts text using the industry-standard algorithm AES (Advanced Encryption Standard):
The following is the result of encrypting some text with that tool:
Can you see which part of the text is harmful? Of course you can’t! Not without decrypting all of it. And the ciphertext gives away no patterns either, because in the chaining mode used here (CBC) the encryption of each block depends on the result of encrypting the previous block.
Try putting that ciphertext through the same tool’s decryption function. You’ll need to enter the following:
- Text to be decrypted: copy the above encrypted text
- Cipher Mode: CBC
- IV: 1234567891234567
- Key Size: 256
- Secret Key: HereIsSomeReallyLongTextThatsKey
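Real AES is far too involved to reproduce here, but the chaining behaviour described above can be sketched with a toy substitute. In the sketch below the “block cipher” is just an XOR with the key, which is completely insecure and purely illustrative; only the CBC structure, and the 32-byte key and 16-byte IV from the list above, mirror the real tool.

```python
# Toy sketch of CBC (Cipher Block Chaining) mode.
# The "block cipher" here is a plain XOR with the key -- utterly insecure,
# but it shows how each ciphertext block feeds into the next.

BLOCK = 16  # AES also works on 16-byte blocks

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    pad = BLOCK - len(plaintext) % BLOCK       # PKCS#7-style padding
    plaintext += bytes([pad]) * pad
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        block = xor(plaintext[i:i + BLOCK], prev)  # chain in previous ciphertext
        cipher = xor(block, key[:BLOCK])           # toy "encryption" step
        out += cipher
        prev = cipher
    return out

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        cipher = ciphertext[i:i + BLOCK]
        out += xor(xor(cipher, key[:BLOCK]), prev)  # undo cipher, then unchain
        prev = cipher
    return out[:-out[-1]]                           # strip the padding

key = b"HereIsSomeReallyLongTextThatsKey"  # 32 bytes = 256 bits
iv = b"1234567891234567"                   # 16 bytes, as in the tool above

message = b"Meet me at the usual place at noon"
ciphertext = cbc_encrypt(message, key, iv)
assert cbc_decrypt(ciphertext, key, iv) == message
```

Because each plaintext block is mixed with the previous ciphertext block before encryption, two identical plaintext blocks produce different ciphertext, and nothing about the content is visible without the key.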
You can’t decrypt the text without the key. By keeping your key secret, you’re free to share the ciphertext with others. Only another person who has this key will be able to read the message. Exchanging the key with that other person is another matter and requires the use of public/private key cryptography.
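That key-exchange step can be sketched too. The classic technique is Diffie-Hellman, shown below with deliberately tiny numbers; real systems use primes of 2,048 bits or more (or elliptic curves), and the values of p, g and the two secrets here are arbitrary choices for this illustration.

```python
# Toy Diffie-Hellman key exchange -- tiny numbers, for illustration only.

p = 23  # public prime modulus (agreed in the open)
g = 5   # public generator (agreed in the open)

alice_secret = 6   # never leaves Alice's device
bob_secret = 15    # never leaves Bob's device

A = pow(g, alice_secret, p)  # Alice sends this over the public channel
B = pow(g, bob_secret, p)    # Bob sends this over the public channel

# Each side combines the other's public value with their own secret
# and arrives at the same shared number; an eavesdropper who saw only
# p, g, A and B cannot feasibly compute it (at real-world sizes).
alice_shared = pow(B, alice_secret, p)
bob_shared = pow(A, bob_secret, p)

assert alice_shared == bob_shared
```

The shared number can then be stretched into a symmetric key, so the AES key itself never has to travel across the network.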
Government snooping – an existential threat?
Michelle Donelan (the digital minister) seems not to understand encryption. She has stated that the government will be able to read particular messages while users still have the protection of encryption. Those two aims contradict each other.
Organisations such as the NSPCC have stated that tech companies should innovate to provide this type of message inspection, but this is not just a technical problem – it requires a complete rethink of how we use encryption in applications and is not trivial.
The only feasible option would be client-side scanning: examining the text on your device before it is encrypted. That creates a route for everything you say to be read, and for the unencrypted text to be leaked to a hacker. Would you want bank details, or other sensitive information that you send, to end up with a criminal gang?
Once again, the simplest way for large companies to avoid prosecution is to withdraw products from the United Kingdom. Apple, WhatsApp, and Signal have indicated that this is the course of action that they will take. They’re not prepared to weaken the encryption for their products.
The consequences do not appear to have been thought through. Losing tools and providers cannot have been the objective of the UK government.
This snooping on end-to-end encryption is reminiscent of the Clipper Chip affair in the USA, where the NSA promoted a chip that had a backdoor. The encryption technique, which was not made available for public review, was shown to have vulnerabilities. It also could not be applied to devices made abroad that might then be brought into the USA. Both the public and several US senators were opposed to this, resulting in a failure to get the Clipper Chip adopted.
What right does the government have to view your private messages? There is certainly an argument that law enforcement would be able to protect the safety of the public if it was able to read messages sent by criminal gangs.
However, does any government know where to stop? Snowden exposed how much of our information the UK government chose to snoop on. That’s what prompted many social media companies to introduce end-to-end encryption. Apple and Google decided to lock down their devices as a result, applying encryption so that they themselves cannot access data on a user’s device. How can we be sure that past behaviour would not be repeated? Especially if the government decides to withdraw the UK from other protections, such as the European Convention on Human Rights (ECHR).
This bill is not balanced. It is designed to respond only to the needs of those who are concerned about potential harms, without considering the massive unintended consequences.
The clock is ticking
This bill deserves a proper overhaul and review by all the relevant experts before it completes its journey through parliament. Sadly that will not happen. The bill has its third reading in the Lords on 6 September, and it will become law soon after. That will be a tragedy.
We need your help!
The press in our country is dominated by billionaire-owned media, many offshore and avoiding paying tax. We are a citizen journalism publication but still have significant costs.
If you believe in what we do, please consider subscribing to the Bylines Gazette from as little as £2 a month 🙏