## Wednesday, 11 October 2017

### Concrete example of 10th percentile issue

Given OFCOM's idea that one in ten lines are faulty, it may help to provide a concrete example of the problem here and explain quite how daft this really is. It is not, as some may assume, the slowest 10% of lines in the country.

A friend of mine has a broadband line close to the cabinet, so the forecast sync speeds at the 20th to 80th percentiles are 79Mb/s to 80Mb/s. He gets 79.912Mb/s. He has no complaints.

The 10th percentile is for "similar lines", so BT will have banded lines that are close together, sampled them, and looked at the range of speeds such lines can get so as to find the 10th, 20th and 80th percentiles. This means some aggregation. I don't know for sure, but this could be a band of line lengths 0-500m from the cabinet. BT will have done some level of aggregation - we may even be able to find out what - but it does not matter for this explanation, so we'll assume 500m line length bands for now.

The 10th percentile is 74Mb/s. This means that lines in that "band", i.e. "similar" lines, sync at a range of speeds from below 74Mb/s up to 80Mb/s. Indeed, many would probably sync well above 80Mb/s if the sync was not capped.

One in ten of these lines will get below 74Mb/s - that is the very definition of "10th percentile". Whilst the occasional line will actually have a fault (less likely on such short lines) and hence sync at a lower speed, the main reason for being below 74Mb/s will be the line length from the cabinet.

So, assuming this is, say, a 0-500m band it could simply be that everyone over 450m from the cabinet gets less than 74Mb/s, a simple fact that they are a certain line length away. Not something anyone can change.
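This circularity is easy to demonstrate with made-up numbers. In the sketch below (an entirely assumed speed-vs-length model and a hypothetical 0-500m band - not BT's figures), roughly one line in ten lands below the band's 10th percentile whatever the lines are actually like, because that is simply what a percentile is.

```python
import random

random.seed(42)

# Entirely made-up model: sync tails off with distance from the
# cabinet, capped at 80Mb/s, plus a little noise. Not BT's figures.
speeds = sorted(min(80.0, 82.0 - 0.016 * d + random.gauss(0, 1))
                for d in random.choices(range(0, 500), k=1000))

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an ascending list."""
    return sorted_vals[max(0, int(len(sorted_vals) * p / 100) - 1)]

threshold = percentile(speeds, 10)
below = sum(s < threshold for s in speeds)
print(f"10th percentile: {threshold:.1f}Mb/s")
print(f"lines below it ('faulty'): {below} of {len(speeds)}")
```

However you change the model, the last line always reports about one in ten - no faults required.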

So imagine such a person at, say, 490m away is getting 73Mb/s. The line may be perfect. The modem may be perfect. There may be nothing that can be done to make the line better or the sync faster whilst using this technology.

Yet, that person is one of the "one in ten" deemed faulty by OFCOM. They can insist the ISP tries to make the line better, engaging engineering time and effort. They can even insist on a refund. Simply because they are one of the "one in ten of lines" below the 10th percentile for "similar lines".

Now, let's look at their neighbour, who is, say, 510m away. They may find they are in a 500m-1km band, and get lumped in with such lines for their forecasts. Being at the short end of that band they may be well above the 10th percentile, even though they sync at a lower speed, say 70Mb/s. So their line is not deemed to be "faulty". Indeed, they could find themselves with a 50Mb/s guaranteed minimum, have an actual fault on their line dropping it to 51Mb/s, and not be caught by the code of practice.
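The boundary effect in this example boils down to a couple of comparisons. Here is a minimal sketch using the numbers from the worked example (the bands and the 50Mb/s minimum are assumptions from that example - BT's actual groupings are not published):

```python
# Assumed bands and thresholds from the worked example:
# the 0-500m band has a 10th percentile of 74Mb/s; the 500m-1km
# band is taken to have a 50Mb/s guaranteed minimum.
bands = [((0, 500), 74.0),
         ((500, 1000), 50.0)]

def deemed_faulty(length_m, sync_mbps):
    """Is a line below the 10th-percentile threshold of its fixed band?"""
    for (lo, hi), threshold in bands:
        if lo <= length_m < hi:
            return sync_mbps < threshold
    raise ValueError("no band defined for this length")

print(deemed_faulty(490, 73.0))  # True - perfect 490m line deemed faulty
print(deemed_faulty(510, 51.0))  # False - genuinely faulty neighbour not caught
```

Two lines twenty metres apart, the healthy one "faulty" and the faulty one fine, purely because of where the band boundary happens to fall.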

Please explain to me how this mad system - where, at various line lengths, arbitrary bands of one in ten people (depending on the arbitrary choices of "similar" line groupings BT makes) are deemed to be faulty - is meant to actually help consumers? It is not that these people are more or less likely to have a fault, or that they are good candidates for some change in technology to improve speed, or even that they have "slow" lines; they are simply "one in ten".

A graph may help explain...

1. Would it help if, instead of defining 'class of line' in this banding fashion, you instead do: for any given line under consideration of whether it's faulty, take all lines which are up to 500m shorter _or longer_ than this line. Is this line in the bottom 10%? This means that you have to be slower than people up to 500m longer than you to be considered faulty.

So, for a line 800m long, you consider all lines in the range 300m-1300m, of which the 800m line, if not faulty, will not fall in the bottom 10%. The line which _does_ fall in the bottom 10% on that consideration (let's say, one 1200m long) isn't automatically considered faulty. Instead we look at all lines from 700m to 1700m and consider that set of lines. It's likely that the 1200m line is not considered faulty either.

This works for most lines. Those in the longest 10% in the country still need to be considered, but I think it would be a start.
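For what it's worth, the sliding-set test described above can be sketched in a few lines of Python. The data model and function name are invented for illustration; the point is simply that each line is judged against lines within 500m either way of its own length, rather than against a fixed band.

```python
def in_bottom_decile(line, lines, window_m=500):
    """Sliding-band test: is this line in the bottom 10% of all
    lines within +/-500m of its own length?"""
    similar = [l for l in lines
               if abs(l["length_m"] - line["length_m"]) <= window_m]
    slower = sum(l["sync_mbps"] < line["sync_mbps"] for l in similar)
    return slower < 0.10 * len(similar)

# Toy population: a clean linear speed/length model, no noise.
lines = [{"length_m": d, "sync_mbps": 80 - 0.03 * d}
         for d in range(0, 2000, 10)]

healthy_800 = {"length_m": 800, "sync_mbps": 80 - 0.03 * 800}
faulty_800 = {"length_m": 800, "sync_mbps": 40.0}  # well below its peers

print(in_bottom_decile(healthy_800, lines))  # False - speed fits its length
print(in_bottom_decile(faulty_800, lines))   # True - flagged
```

Flagging every line this way is O(n^2) in the number of lines, but checking one suspect line is a single pass.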

1. I have assumed somewhat, largely because Ofcom don't say how it is to be done. What you suggest is an odd way to do percentiles. Normally, however you do it, one in ten people actually fall into the 10th percentile. Yours is more a "fault threshold" and can, of course, be massively fudged by making that range bigger. It is not set by Ofcom.

2. Indeed, I have to say that if you are looking at the 10th percentile of "similar lines", surely that has to be from a fixed set of "similar lines". A sliding set makes no sense, surely.

3. Well, your suggested 'obvious' implementation clearly makes no sense for the current application - hence the problem.

What I'm doing here is defining 'similar lines' more sensibly. If you say 'within a 500m band' (and that should probably mean '+/- 250m' in my example) then fixing those bands is problematic. If my line is 499m away from the exchange, comparing it only to those closer to the exchange isn't comparing it to 'similar lines'. Surely the line 501m away is more similar than the line 1m away. That's what I'm trying to do.

Yes, this scheme is O(n^2) to list all of the 'faulty' lines, but that's not really the objective. The objective is: given a line which is possibly performing poorly, can we determine whether it's faulty? To do that you consider all 'similar' lines - which includes those slightly longer and slightly shorter, no matter if you happen to be close to either side of some artificial band boundary. If you're not in the bottom 10% of those, you're not faulty.

If you just want '10th percentile lines between 0 and 500m' then sure, this doesn't achieve that, but as a way to determine if a line is faulty, I think it's much superior.

Obviously ofcom need to define what they mean, but I don't think it would be unreasonable to suggest this is what they mean.

4. The consultation even has a diagram of ten people to show how percentiles work. Your idea is good but is basically the same as saying X standard deviations below the mean for a specific line length, or some such.

5. I completely agree that a definition of 'faulty' that will always (by reason of statistics) apply to 10% of lines is ridiculous.

But equally I'd say that classifying a 10m line and a 490m line as 'similar' is pretty ridiculous too (they're nearly two orders of magnitude different in length!), and I don't think it's really valid to argue against one ridiculous definition by using another ridiculous definition.

2. It's perfectly obvious why some providers would sign up to this: only 10% of their customers can get out of a contract but they can still point to the code of practice as the gold standard. Many customers don't have any idea about their internet speed and so are happy with a faulty line.

It's equally obvious why A&A don't see ten percent of lines in what BT thinks is the bottom 10th percentile: the bottom 10% is currently set by faults, which are more likely to get fixed on your lines for the normal reasons (competent tech support, competent customers).

Maybe some dreamer at Ofcom thinks that this is a great way to set a moving target and force everyone to improve. The reality is that it makes it harder for 90% of people with faults, and there's not much Ofcom can do when the dominant factor is the line to the customer's house, for which there's only one provider.

3. I am not trying to defend Ofcom -- they need to explain clearly what goal they are trying to achieve, and how they believe this will achieve it. However, I don't think your example shows that the idea is stupid or won't work.

Even with all the assumptions you make, there is a major point I would disagree with: "...a simple fact that they are a certain line length away. Not something anyone can change." On the contrary, that is entirely within the control of BT. They could choose to add another cabinet, or they could dig some roads up and reroute connections for people who are not actually that far away, or they could replace overhead lines with underground which may be more optimally routed or just have less noise, etc.

That is the point I was trying to make yesterday. This new requirement might tip the business case for some particular investment decision (CapEx project or OpEx redirection) from "no" to "yes".

I am assuming that this is not designed to immediately benefit any individual user (except those who are having difficulty getting actual faults fixed), but to drive a general improvement over time. Maybe for 90% of the people in this position nothing would change. But if it causes a new cabinet to be installed in some cases, then suddenly a number of people can get a shorter connection and their speed and reliability will go up.

My guess would be that Ofcom would like to see much larger bands for "similar" (and without exceptions carved out for "difficult" or "historical" cases), so that the initial improvements would focus on those with the poorest connections today.

1. If the objective is a general investment in infrastructure to improve speeds overall, then great - but this is not remotely likely to achieve that. It is essentially picking 10% of people at random who can kick up a fuss with their ISP and cause hassle. That is not a sane way to encourage investment in infrastructure, and it does not even target the slow lines, just those slow within a group of similar lines.

On top of that, they have said the guaranteed minimum should be a speed-test type thing, adjusted for peak congestion, rather than a sync speed. The effect is logically the same, as the adjusted 10th percentile sync should give a speed test of the advertised minimum with peak congestion, but it creates a point of argument with the ISP, and allows slow servers, slow WiFi, slow machines, and many other things outside the ISP's control to push a line below the threshold even if sync is above the 10th percentile for sync.

This means the end ISP is stuck with arguments, and maybe even refunds, whilst unable to actually even report a fault to BTW or Openreach and with no way to get a refund from them. So the ISP suffers and nobody who is in a position to invest in infrastructure even knows it is happening, let alone has any incentive to invest.

4. Something else to factor in is the accuracy of BTW's speed estimates.

They say for my line that the "clean" range is 27.4 - 36.6 (handback is 25). I live in a semi-detached house in the North London suburbs, getting fed from a pole opposite. From the next pair of houses up the road they're fed underground, and the clean speed estimate is 20 - 27.4 (handback of 17.5) - and they're on the same cabinet as me.

So the fact that I sync at 20.0 probably means the speed estimate for my line is wrong. Perhaps replacing the nasty rusted bell-wire isn't the answer then?