Tuesday, 3 October 2017

Amber Rudd - you do not need to understand encryption

Amber Rudd has made it clear that she feels she does not need to understand encryption. See BBC article here.

Really, this is not actually an issue about encryption at all. You do not need to understand it, no.

That said, the principles are not hard to understand, and Amber Rudd could take the time to understand those principles. I am sure there are many trusted advisers who would be happy to explain them. It would help her understand the sneering and patronising responses if she understood why her suggestions and comments are so comically stupid.

But let us try to put this in terms a politician should be able to understand.

There is an activity which is common in modern society. We'll try to understand how any activity could be considered for legislation, whether encryption or not.

That activity is conducted by bad actors. In this instance the bad actors are terrorists and extremists, one of the statistically lowest threats we face in modern society, but an issue which is disproportionately important to politicians for some reason.

That activity is conducted by good actors. Indeed, it is used by a lot of people every day. It is hard to find anyone that does not absolutely rely on this activity every day, either directly or indirectly. Everyone with a bank account relies on this activity.

Now, because the activity is conducted by bad actors, it seems that something must be done. It is worth bearing in mind that this is not always the case, and indeed, given that the bad actors in this case, terrorists, represent less of a danger than slipping on a banana skin, the idea of not doing anything is not completely stupid.

So what can be done about this activity? Can it be banned? Can it be restricted? Can it be changed? Can it be controlled? Well, this is where understanding the activity may help, but let us assume it can be controlled in some way for a moment.

The next question is: assuming some legislation can be made that will somehow restrict or control the activity, what are the consequences of doing so?

There are two main issues.
  1. Will the restrictions impact the bad actors at all?
  2. Will the restrictions impact the good actors at all?

In this case, the activity is encryption, so let us look at these points.

Will the restrictions impact the bad actors at all?

MATHS EXISTS! No matter what law you make it is possible for the bad actors to make use of encryption. It is impossible to un-invent mathematics and encryption.

So, we know the answer to point 1 - will this impact the bad actors? Well, not really - they can move on to other apps, other tools, or their own apps. They do not even need to do anything difficult or complex. Even if what they do is illegal, they can still do it. There are even ways of hiding what they are doing so you cannot tell, and so cannot convict them of breaking those laws. See the video at the end of this post for how to encrypt with pen, paper and dice. Maths cannot be un-invented, sorry.
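The pen-and-paper-and-dice scheme referred to above is a one-time pad, and it needs almost no machinery at all. A minimal sketch in Python (function names are mine, for illustration only):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a truly random key byte.
    # The key must be at least as long as the message and never reused.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the same XOR with the same key.
otp_decrypt = otp_encrypt

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # dice rolls do this job on paper
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

With a truly random key as long as the message, used once, the ciphertext reveals nothing about the message at all - which is exactly why the maths cannot be legislated away.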

[update: some useful comments on this below] I agree that it is not quite so simple. I cannot say that terrorists will simply use other apps. I can say that open source communities and privacy activists make good quality apps, not some dodgy "home grown" broken crypto, and they are even working on ways to make those apps invisible to police states and oppressive governments, so the apps to use will exist. It seems odd that terrorists would not make use of them. The issue here is that catching one terrorist by such a measure is not worth it - indeed, if you could guarantee to catch every terrorist ever it still would not be worth it - they are still so few and harm so few. We need evidence-based laws and policies, and it amazes me that terrorists are even on the radar ahead of bee stings.

Will the restrictions impact the good actors at all?


This has been seen over and over again: the industry is in a constant battle against criminals - criminals who cost millions of pounds every day one way or another, and who exploit companies and normal people. Unlike terrorism, this is a big issue impacting a lot of people. The battle is now at the stage where the best defence against criminals is end-to-end encryption, which means that even the intermediate companies cannot see the communication. This is because attacks on the data via those intermediate companies are a real threat where criminals can get in (technically, or by social engineering, etc.). So people rely on this level of security, all the time, every day, for their banking, their medical records, everything.
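The reason even the intermediate company cannot read end-to-end encrypted traffic comes down to key agreement: the two endpoints derive a shared secret that is never sent over the wire. A toy Diffie-Hellman exchange sketches the idea (illustrative parameters only - real systems use vetted groups or elliptic curves):

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime and a small generator, both public.
# Real deployments use standardised groups or curves such as X25519.
P = 2**127 - 1
G = 3

alice_secret = secrets.randbelow(P - 2) + 1   # private, never transmitted
bob_secret = secrets.randbelow(P - 2) + 1     # private, never transmitted

alice_public = pow(G, alice_secret, P)  # these two values are all that
bob_public = pow(G, bob_secret, P)      # an intermediary ever sees

# Each side combines its own secret with the other's public value;
# both arrive at G^(a*b) mod P without it ever crossing the wire.
alice_key = hashlib.sha256(str(pow(bob_public, alice_secret, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_public, bob_secret, P)).encode()).digest()

assert alice_key == bob_key  # shared key, derived, not sent
```

Anyone watching the wire sees only the two public values; recovering the shared key from them is the discrete logarithm problem, which is what the whole scheme rests on.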

So, now we know: any attempt to restrict encryption will impact the good actors. They will not be motivated to use other apps or do encryption themselves - why would they be? As Amber Rudd says, normal people do not care whether their WhatsApp chat is encrypted end-to-end or not (until they are the victim of a crime, obviously). Only the bad actors will in fact be motivated to use alternatives.

So, you do not need to understand encryption really.

You just need to know that this activity is used for a minor threat (terrorism) and that any attempt to control it will not impact that threat but will impact all of the good uses of the activity.

Now you can make a choice of how to address the issue.

This is no different to seeing that terrorists use white vans, so banning them!
This is no different to seeing that terrorists use an underground map, so banning them!
This is no different to seeing that terrorists use ball point pens, so banning them!

It is a simple exercise to understand the options and the consequences of those options, and to make the best decision for the country as a whole.


  1. "This is no different to seeing that terrorists use white vans, so banning them!"

    There was actually a suggestion after one of the recent vehicle-based attacks that van rentals should be more restricted, because apparently it's impossible to run people down using your own vehicle.

    I have also heard that after the Las Vegas attack some of the news agencies were questioning whether security checks should be introduced at hotels, as if the gunman would just have given up and gone home if he couldn't find a hotel to shoot from.

    The obsession with superstitiously restricting some random aspect of the latest terrorist's methodology, in the hope that it will magically prevent future attacks, is reaching ridiculous levels. Perhaps after the next one they'll demand a ban on whatever brand of cereal the terrorist had for breakfast.

  2. Quite so.

    However, there is another side to this. If you introduce backdoors, criminals will exploit them -- but if you don't, the state of software security is so bad that criminals will just exploit something else instead!

    This actually proves that you don't need holes in the encryption: the state of software security is so bad that *our own security services* can and do exploit it in just the same way as the criminals do (except they don't tend to spread self-replicating software that sends spam because they don't want to be obvious).

    So this is all of a piece with the question of whether, when the security services find a vulnerability, they should get it fixed or hoard it. There's a difficult tradeoff here. If they both get everything they find fixed *and* don't have backdoors in encryption, then we end up with a worst of all worlds where the bad guys can use holes they found but our security services can't get in because they got them all fixed, *and* we still have huge numbers of holes and crappy software.

    The question is whether having *even more* holes is worth the security services being able to tap into stuff (assuming *this* to be harmless, a very big if).

    Certainly right now they have so many ways in that there is no point adding backdoors to anything, even if they worked, which as you note they don't. Backdoors in encryption will only catch the stupid terrorists and criminals, and you frankly don't need any extra powers to catch those. It's the smart ones that don't get caught that are the problem.

    1. At the moment though, the industry is constantly working to try and plug holes and improve security. The very use of end to end encryption is an example of that. It means that criminals mostly have to hack the end devices, and they are getting better all the time (well, there will always be new bugs as well), but it is a battle and it is being fought and does not need fighting with one hand tied behind its back!

  3. If we want a real discussion about this, let's try starting with real issues that apply to all services, and not with the fundamentals of the mathematics:

    1. E2E encryption means that my use of E2E encrypted services is not suspicious in and of itself; in turn, this means that if my employer, wife, or ISP intercepts E2E encrypted traffic, they have no grounds to suspect me of wrongdoing. If E2E is not the default, then merely using E2E encryption is itself suspicious - even if all I'm doing is trying to set up an amazing 20th wedding anniversary holiday...

    2. Would it be acceptable to tell targeted suspects that they no longer have E2E encryption on their chats? IOW, have WhatsApp tell them "you're no longer protected" whenever E2E is disabled for that user. If not, what prevents a criminal hacker from triggering the "no E2E" for a user without warning them that's also used by Law Enforcement?

  4. There has been comment today that she wants it to be illegal to "even look at bad material on the web whereas at the moment it is only illegal if you actually download that bad stuff". (paraphrased)

    More total misunderstanding. To view a web page, that page is downloaded so that the browser can display it. Looking at a web page IS downloading that web page.

    Otherwise there would be no impact on an ISP's download allowance even if a user looks at 10 million web pages per day.
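    This is easy to demonstrate: any HTTP "view" is a GET request whose response body has to arrive in full before the browser can render anything. A minimal stdlib-only sketch against a local test server:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body>Hello</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "Looking at" this page means the server sends every byte of it.
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
body = urllib.request.urlopen(url).read()  # the "view" is this download
server.shutdown()

assert body == PAGE
```

    Every byte of the page crosses the wire before it can be displayed - looking at it and downloading it are the same act.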

    1. Indeed, see also the way that the child pornography laws are already abused so that anyone viewing dodgy material on-line can be convicted of the greater offence of "making a copy of an obscene image", an offence intended to be used on the printers and publishers. (But it's only child porn addicts and you're not in favour of THEM are you?)

    2. No, I am not saying I am in favour of someone, or indeed anyone, looking at child porn. My point is that a senior UK law-maker is today - and yet again - displaying a further total lack of understanding of the topics on which she is attempting to legislate. It is not just MPs (junior, senior, otherwise) who misunderstand "downloading" vs. "looking at a web page". I have this conversation with various people who exclaim "I don't download anything, so why is my download usage so high?" I then explain that every web page looked at *IS* a download. They generally refuse to believe me, "because I don't download anything. I only look at web sites from time to time". That's fine for the Man In The Street to misunderstand. He/she can rely on expert consultants to correct them and help them understand. But it's absolutely not OK for a law-maker to legislate based on such a catastrophic total lack of understanding of such basic principles.

    3. The issue is whether s58 Terrorism Act 2000 covers streaming. The wording is "collects or makes a record of information", and at least some feel that it is not clear that this covers the streaming of information, where no file remains on the user's machine at the end of the session.

      I think there is a degree of ambiguity there, and, with a goal of removing any argument on that point, I can understand why a clarification might be desired.

  5. I'll sneer at stupid politicians all I want thanks, particularly because even after one has spent time educating them, they still do not listen.

    They do not want to be educated on say, end-to-end encryption, or their idiotic Internet spying schemes. They only want someone to tell them they're right.

    And seeing as they don't understand the first thing about the technology they're attempting to regulate, it's not.

    Maybe I'll run for parliament in a few years, under the campaign of being the first technologically competent MP who understands the Internet and online security.

  6. First, it's good to see that this discussion has not fallen for the "they want to ban encryption!" strawman, which seems to be doing the rounds yet again.

    As I understand it, at least, the discussion is on a relatively narrow point, which is whether providers of massively popular over the top messaging applications should be required to design their systems in such a way that they either do not offer end-to-end encryption (but do offer some other approach to encryption, which can be removed centrally), or else have the ability to switch a user from end-to-end encryption to removable encryption, on receipt of an interception warrant. Depending on architecture, there may also be an element of "stopping peer-to-peer routing so that the operator can do interception", but I don't think that that aspect has been raised.

    There has been, as far as I know, absolutely no talk of "banning encryption", stopping banking websites from using TLS and so on.

    Some of the points made here stand: someone who wishes to do so can use an encryption scheme entirely unrelated to the underlying application, such as GPG, or a one-time pad. Conversely, buying a pair of gloves is trivial, yet some criminals still leave fingerprints behind: perhaps not everyone "of interest" will adopt different technologies immediately, which, if true, would mean that there was still some benefit to the change in architecture. The outcome may not give government perfect access, but it may give "good enough" access for its needs, or simply "better" access than it has already. This may be sufficient to justify the measure.

    Similarly, points around user security / safety stand: if the removal of e2e on a holistic basis, or the introduction of a capability to remove e2e on a targeted basis, were introduced, users' communications would be less secure (e.g. against foreign state attacks, on a centralised encryption system) than if e2e were in place.

    An un-made point too is the impact on the architecture, and the cost/viability, of the operator's model: if the requirement reduces scalability, or increases technical complexity, there is a cost to the operator as well as to each user. Potentially, some of these costs could be quantified, and contributions made by the government, but impact such as slower time to market is much harder to quantify and recover.

    Even with the debate construed correctly, it is not an easy one to resolve, despite numerous opinions. It requires an assessment of the proportionality of the measure in question and, outside government, no-one is well-placed to do that, because of informational inequality: those outside are unable to carry out any of the steps of a proportionality assessment in a meaningful manner. Whether the benefit outweighs the harms, who knows.

    But much of the online flapping about the "banning of encryption" seems, in my opinion, to be a missed opportunity to discuss the actual topic at hand.

    1. Politicians have always insisted that they don't want to ban encryption, but their intentions have been extremely unclear because what they say they want often directly contradicts the promises they make.

      For example, "we won't weaken encrypted services" is often put right alongside "but police must be able to read messages that criminals are exchanging through those services". Clearly you can't achieve both of these things at the same time. Politicians are never going to be trusted if they insist on using this kind of unclear double-speak.

      From the article: "She insisted she does not want "back doors" installed in encryption codes, something the industry has warned will weaken security for all users, nor did she want to ban encryption, just to allow easier access by police and the security services." - admittedly this is the BBC paraphrasing her rather than a direct quote, but I would argue that banning e2e encryption on certain services is no different to "installing backdoors", "weakening security" and "banning [a type of] encryption", and it just isn't possible to "allow easier access by police" without doing exactly that.

      However, to my mind the most concerning thing is that legislators are comfortable standing up and saying "I am legislating about a thing I don't understand and I don't want to understand the things I'm legislating about. I'm tired of the people who do understand telling me that what I'm trying to do doesn't make sense."

      It is all far too close to Gove's "I think we've heard enough from the experts" comment - people who aren't experts need to stop telling the experts to shut up and actually listen to them. You wouldn't try to operate on your own cancer because you'd "heard enough from the doctors" would you?

    2. Oh, I don't think politicians have necessarily done themselves any favours, and it is difficult for technical experts, or just people with an opinion, to weigh in on something when they do not have the detail of the problem to be solved.

  7. There is a logical fallacy at work here, which is to assume that because a measure designed to promote some good (or at least assist the security services) can be circumvented in some way, then there's no point doing it at all.

    From the article: "... any attempt to control it will not impact that threat..."

    You've no way of knowing that's true, and it's quite probably false. In just the same way as locking your house is not pointless merely because someone could break a window, making would-be terrorists find it *harder* (note, I agree, NOT impossible) to communicate secretly may have a number of benefits:

    * Their use of home-brewed encryption is more likely to be flawed
    * Their use of home-brewed encryption is more likely to be noticed
    * They may not actually have the knowledge to do any kind of encryption if it's not handed to them on a plate.
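    A concrete example of the first point: a classic home-brew mistake is XORing with a repeating key, which an eavesdropper can cancel out entirely. A short sketch (names are illustrative):

```python
def xor_with_key(data: bytes, key: bytes) -> bytes:
    # A typical "home-brew" cipher: XOR with a short repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secretkey"
c1 = xor_with_key(b"attack at dawn", key)
c2 = xor_with_key(b"retreat now!!!", key)

# An eavesdropper who XORs the two ciphertexts removes the key entirely,
# leaving the XOR of the two plaintexts - the classic key-reuse break.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(b"attack at dawn", b"retreat now!!!"))
```

    XORing the two ciphertexts removes the key and leaves the XOR of the plaintexts, from which both messages can usually be recovered by crib-dragging.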

    Of course there's a trade-off, and some kind of ban might well do more harm than good, and that needs to be weighed-up. But you won't be taken seriously making arguments if you stick to the trope that the only two levels of anti-criminal effectiveness possible are 0% and 100%.

    1. OK, I'll bite. Yes, I cannot say for sure that they will change to using better encryption. Just as you cannot say that their use will be flawed or noticed.

      And yes, a trade-off. Such measures may thwart some would-be terrorists that are not smart enough to find someone smart enough to help them. I agree. But the trade-off is the damage that such measures do. Again, hard to predict for sure, but the daily impact of scammers and spammers and hackers, and people trying to get into networks (have you ever looked at a firewall log?), shows the scale of the other side - the bad actors that would love to break networks where the encryption is weakened to catch the odd stupid terrorist. IMHO it is far from worth the trade-off.

      As for whether terrorists would change though, clearly they have. They have started using apps that have encryption already, else there would not be the claimed issue with reading messages. They are not just sending letters or making old fashioned phone calls or even sending old fashioned text messages. They are installing apps, and using them. If one of the apps is compromised and another is not, someone, somewhere, in any group that is doing "bad things" is going to say "maybe we should change to using that more secure app".

      Bear in mind that apps from some company are not the only things going, but they are pretty much the only thing you can control.

      There are open source apps, especially on less controlled platforms like android, created by lots of separate people and not centrally controlled by some company you can target with a law. The law has nothing to put in its cross hairs apart from the users (i.e. the very terrorists that ignore other laws already).

      These apps are going to be good, not some iffy "home grown" stuff. You don't need to "home grow" the maths any more, it exists and so does the code. But privacy activists will make good quality apps, and do make good quality apps, that are not flawed (or fixed quickly when they are).

      Those same apps will be increasingly hiding their tracks because of police states and oppressive governments so you cannot spot them. Bear in mind they are not banning encryption, so encryption will be "seen on the wire" all the time, and you cannot tell what it is (that's the point).

      Even so, I have read that groups like ISIS already have apps anyway.

      So, yes, some criminals will remain stupid. Yes, some will be caught if back doors or weakened encryption is implemented.

      But look at the numbers.

      How big a threat are terrorists anyway? Is it worth doing *anything* if it has any cost or negative impact at all - almost certainly not if you actually look at the evidence and numbers. We would save more lives and reduce hardship more by simple measures in road safety or better designed bath mats.

      How big a threat are hackers - again, we can see so many cases of companies being hacked to get data, many per year, and we see the millions of attempts to hack machines and spoof things and defraud people. We can see that this is a big target, and we need the best tools possible to defend against that real threat without hindrance.

      Just consider the numbers, the evidence, even if some legal measure was 100% guaranteed to stop every terrorist attack ever again, it is not worth it if it weakens security for you and me and everyone else in our every day lives, is it? Surely?

    2. If you've now conceded that there is, in fact, a trade-off, and that there *are* some benefits that need to be weighed against some costs, then my previous post has been successful.

      I'm not going to argue about what the correct outcome of the trade-off would be - we quite probably agree on that anyway.

      But you will have an uphill struggle to persuade anyone who makes their living by making decisions in the public eye that their world would be a better place if they ignored the terrorists and concentrated on bath-mat safety. There is just no equivalence between a bomb at a concert full of young people and a nursing home full of nearly-dead-anyway people eventually succumbing to their deadly bath-mats. People pretending the two were the same just because the number of corpses was the same would not be taken seriously.

    3. "There is just no equivalence between a bomb at a concert full of young people and a nursing home full of nearly-dead-anyway people eventually succumbing to their deadly bath-mats."

      How about the two biggest killers of young people: road traffic crashes, and suicide.

      Between 2006 and 2013, there have been about 18,000 road deaths and 44,000 suicides. During that same period there have been 2 terrorism related deaths, one of which was the terrorist themselves (Glasgow Airport), and the other, I would say was not terrorism - it was murder (Lee Rigby).

      Of course, terrorist plots have been foiled over that time and without the security services doing their job there would likely have been more deaths. But the fact that terrorism causes 0.003% of the deaths compared to the two biggest causes of young people dying tells me that the security services are doing fine and don't need any more resources (including new legislation).

      Of course, it isn't politically expedient to say "we're going to fund road safety and mental healthcare instead of pouring more money into anti-terror" because the press *and* the politicians make such a big deal about terrorism.

      By the dictionary definition of terrorism (trying to achieve political aims by inciting terror), the real terrorists are the politicians - they are the ones who keep saying "if you don't let us take away your civil liberties you are in real danger of terrorists killing you", when in fact that risk is almost immeasurably small.

  8. Cynically, I wonder if the whips have been abusing lawful intercept provisions to spy on MPs' communications, and are now stymied because (for example) Boris Johnson now has his disloyal chats over WhatsApp[1] instead of SMS?

    It would explain why they're so keen on banning E2E encryption, and why WhatsApp keeps being brought up.

    [1] http://news.sky.com/story/boris-johnson-urges-mps-to-back-may-in-whatsapp-message-10912405

    1. There's quite a problem indeed if a whip can get an LI warrant signed off and served on a telco.

    2. If my cynicism is warranted, then I'm expecting the whips to either get the Home Secretary to sign off on the warrant, *or* have found a route to get warrantless intercept working.