Monday, 16 January 2017

Teaching us to suck eggs? BT?

We have a customer on a fibre to the cabinet (FTTC) service which has packet loss.


The red is loss, as measured by one second LCP echoes over the PPP link, and is often over 5%. Levels of random packet loss like this severely impact his ability to make use of the service.
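For context, the loss figure is simply the proportion of echo requests that never get a reply. A trivial sketch (the function name and the one-minute example are mine, not the actual monitoring code):

def loss_percent(sent, received):
    """Percentage of echo requests that got no reply."""
    if sent == 0:
        return 0.0
    return 100.0 * (sent - received) / sent

# One echo per second for a minute, three replies missing:
print(loss_percent(60, 57))   # 5.0 - the sort of level this line is seeing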

The loss started at the exact moment BT did work on the circuit due to a major outage, so is clearly related.

This is the line before the outage, and you do not get much clearer than that - no loss. Same as every day before :-


This is the day the line came back after their major outage (which lasted two days) :-


And this is the next day, which looks much like every day since :-


It does not take a rocket scientist to see there is a problem there - periods of around 5% loss, sometimes more, most of the day, every day, since the outage.

And yes, that is the start of OCTOBER 2016! BT have failed to fix the fault for that long!

Today we got this, and I am almost at a loss for words! Talk about teaching us to suck eggs!

Is the customer using a VPN? Data is transmitted in discrete units known as packets. When a network server is overloaded, these can get discarded. 
This is known as packet loss and results in slow loading game dynamics and graphics, or the unsatisfactory performance of a VPN connection.
In such circumstances, there isn't much which can be done to improve matters, as the cause is not associated with your PC or broadband service.

I'm really not happy about this, but the "there isn't much which can be done to improve matters" is just shocking. We have asked BT to confirm if they are stating, officially, that 5% loss on an idle line is considered "acceptable" for a GEA/FTTC service - we await the response.

They even go on to say :-

In the meantime can you ask the customer to run some traceroute and provide and this hopefully will aid us in seeing where in the network  the packet loss is occurring.
SPs engineers can use a "wire shark" which can detect packet loss at points in network.

This is after explaining that we can see the loss at the LCP level on the PPP link and providing access to and copies of graphs showing the loss over and over again! It is like dealing with Dory to find a fault called Nemo. We keep having to repeat ourselves.

There is one other small snag.

We are all used to the notion of "fibre" broadband not actually being "fibre" which is why this is "Fibre to the cabinet". BT sell this to us as "Fibre to the cabinet" and call it FTTC. It turns out this line is in fact "Microwave to the cabinet". A good idea, normally, but not as described, and clearly beyond BT to actually understand and fix.

This just highlights the problem of there being no clear definition of the service: we need a clear specification of acceptable levels of idle/random packet loss, idle latency and jitter, reliability/resyncs, minimum sync speeds up and down, and even throughput before loss/latency starts. Without these you can literally spend months bashing your head against a brick wall, having engineer after engineer sent out (each potentially costing around £200).

Saturday, 7 January 2017

Traffic management in A&A

A&A do not do much in the way of "traffic management"!

This was somewhat brought home recently when someone tried to sell Alex some DPI / traffic management system over LinkedIn, seeming to think A&A would need one.

He was selling DPI (Deep Packet Inspection) systems that can "manage" various types of traffic. As he explained, these could "throttle" peer to peer traffic.

What was amusing is that Alex tried to explain that we don't need that - one would only need such things to manage a congested link. We aim not to be the bottleneck, and so not to have congested links. This is hard work and there are occasional exceptions, but the plan is that we have enough back-haul to carriers, core network, and links to peers and transit that normally we are not the bottleneck. Basically, we should not slow down at peak times.

Alex's tweet (here) showed the exchange, where the salesman did not quite understand how we work. I am pleased at the number of comments and retweets appreciating our stance on this.

From the discussion it is worth mentioning a couple of exceptions to the rule.

1. Denial of service attacks, where so much data is sent to a customer that their Internet link is unusable anyway. We take action in such cases not only to help the end user in question but everyone else on our network that could be affected. Such traffic is far from "normal" usage and not something our customer has asked for. We always reserve the right to protect the network as a whole. Thankfully this is rare.

2. Where the link to the customer is congested because of the capacity of that link to their line - here we do do some extremely "light" traffic management, in that larger packets are dropped before smaller ones. We have to drop packets if the link is full! This is a very simple metric and needs no DPI. Large packets are a feature of bulk data transfer such as TCP, which can adapt and slow down, whereas smaller packets are more likely interactive traffic, VoIP or DNS, which cannot. This level of management, which we let customers control, allows VoIP to keep working in the face of large downloads. A rough sketch of the idea follows.
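For illustration only - this is not our actual implementation - a size-biased drop decision when a link's queue is nearly full might be sketched like this (the thresholds and names are made up):

import random

MTU = 1500

def should_drop(packet_len, queue_fill):
    """Sketch: the fuller the queue, the more likely a drop, and bigger
    packets (bulk TCP) are more likely to be dropped than small ones
    (VoIP, DNS, TCP ACKs). queue_fill runs from 0.0 (empty) to 1.0 (full)."""
    if queue_fill < 0.8:
        return False                      # plenty of room - never drop
    pressure = (queue_fill - 0.8) / 0.2   # 0..1 as the queue approaches full
    size_bias = packet_len / MTU          # larger packets attract more drops
    return random.random() < pressure * size_bias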

Basically, that is it.

We have no need for Deep Packet Inspection traffic management. If someone wants to fill their link with P2P, then they can. Our tariffs all have some level of usage cap, even if in the terabytes, so if someone is "taking the piss" they will hit limits. Even so, with 1TB and 2TB monthly usage packages now, we are pretty accommodating even with non-stop streaming video.

At the end of the day we should not care what you are doing and do not need DPI based traffic management systems! Well done Alex for explaining this.

Friday, 6 January 2017

Barcodes

I have been messing with barcodes most of my life - and I don't say that lightly! My first ever commercial software, written when I was 15 or 16, was some barcode reading software for an RML 380-Z. It involved reading some simple character barcodes, and also EAN/UPC barcodes. All the timing was done in the processor, based on a one-bit input from a light pen / reader.

I learned about barcodes back then and have been messing about ever since in various ways.

There are two main types of barcode, though, to be fair, only one actually has "bars". The two types are 1D (linear) barcodes and 2D codes. It is really rather misleading to call the 2D codes "barcodes", to be honest.

Linear barcodes

There are many types of linear, or 1D, barcode. They are designed to be read by a wand, laser, or other reader that looks along a line across the barcode, seeing black and white in specific timings or spacings.

Normally these need a quiet zone (usually white) before and after the code, and then have bars and spaces (bars being black and spaces being white) of certain sizes. Some standards simply have thick and thin elements, and thick need not be exactly twice the width of thin; in practice, treating thick as 2 "units" and thin as 1 "unit" usually works even in such cases. Some systems have several thicknesses of bar and space, each a multiple of a basic unit size. This maps well on to simple pixel graphics images.

One of the least efficient and most annoying of these is "Code 39". This uses 5 (black) bars with 4 (white) spaces, making a total of 9 elements, of which (mostly) 3 are thick and the rest are thin. Thick can simply be twice the width of thin. Code 39 allows 40 combinations of 3 of the 9 elements being thick, which codes letters, numbers and a few symbols. The space between each character can be one thin space, or more. There is also a set of special codes with thin bars and thin spaces apart from one thick space, giving 4 extra characters.

The beauty of such a system is that each character is a self-contained sequence, and you can in fact make a font out of it. There are no inherent check digits. Each normal character is the same size. The codes start and end with a "*" character. So it is very easy to construct, though very inefficient.
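To show how easy it is to construct, here is a rough Python sketch that turns wide/narrow patterns into alternating bar/space widths (narrow = 1 unit, wide = 2 units) with a narrow gap between characters. The two pattern entries are placeholders for illustration only, not the real Code 39 table:

# Each character is 9 elements: bar,space,bar,space,bar,space,bar,space,bar.
# 'W' = wide (2 units), 'N' = narrow (1 unit). The values below are assumed,
# purely to illustrate the structure - a real encoder needs the full table.
PATTERNS = {
    "*": "NWNNWNWNN",
    "0": "NNNWWNWNN",
}

def code39_widths(text):
    """Alternating bar/space widths in units, starting and ending on a bar."""
    widths = []
    for i, ch in enumerate("*" + text + "*"):
        if i:
            widths.append(1)   # narrow inter-character space
        widths.extend(2 if w == "W" else 1 for w in PATTERNS[ch])
    return widths

print(code39_widths("0"))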


Another simple code that only uses thick and thin bars and spaces is ITF (Interleaved 2 of 5), which only codes numbers, and then only an even number of digits. It is much more compact for numeric sequences. A common checksum for it is the Luhn checksum, as used on credit card numbers. Each pair of digits is coded as 5 bars and 5 spaces (interleaved), where 2 of each 5 are thick. This gives the 10 combinations needed for digits 0-9.
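As an aside, the Luhn check itself only takes a few lines - this quick sketch computes the digit to append, and the worked example at the end is the usual textbook one:

def luhn_check_digit(digits):
    """Double every second digit starting from the rightmost, subtract 9 from
    anything over 9, sum the lot, and pick the digit that brings the total to
    a multiple of 10."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 0:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return (10 - total % 10) % 10

assert luhn_check_digit("7992739871") == 3   # full number: 79927398713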


We then get a tad more complex where we do not simply have thick and thin, but 1, 2, 3 or even 4 unit widths. The system used for retail product marking, UPC (Universal Product Code) and EAN (European Article Number), allows products to be coded using a numeric value.


Using more widths allows greater code density. The format has specific additional control fields, such as the two thin bars with a thin space at the start, end and middle. There is a standard checksum coding as well. This coding handles specific 13, 12 or 8 digit sequences only.
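The standard checksum is simple enough to sketch: the first twelve digits of an EAN-13 are weighted 1,3,1,3,... from the left, and the check digit brings the total to a multiple of ten (a rough illustration; the example number is just there to show the arithmetic):

def ean13_check_digit(first12):
    """Check digit for the 12 leading digits of an EAN-13 code."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

assert ean13_check_digit("400638133393") == 1   # full code: 4006381333931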

Another common linear code is Code 128 - this uses multiple-width bars and spaces (up to 4 units wide). It has special coding for pairs of digits, making it efficient for numeric sequences, but it also allows letters, numbers and symbols. It is probably the most dense and flexible 1D coding that you can use.


Like most linear coding systems, each character in the barcode has a consistent width (apart from the special characters in Code 39). This helps with formatting a specific number of digits or characters into a specific space.

Two dimensional codes

There are two main standards for 2D codes. These are not really "barcodes" as they do not use "bars"; instead they use patterns of pixels which are black or white. Both include forward error correction using Reed-Solomon coding, which means that many defects and errors in printing and reading can be corrected. Obviously the technology to read these is different - based on cameras rather than linear pens or laser scanners.

One standard is IEC16022 "DataMatrix". It is quite nice technically. It allows a number of different methods for encoding data, optimised for numeric or alphanumeric content and so on. It is used on postal systems in the UK quite a lot.


The other common 2D code is the QR code (IEC18004). These are, in my opinion, not as nice technically, and not as compact, but they look "cooler" so are kind of winning the popular vote on such things. They have target squares within them that sort of look better. They too have different coding formats for numeric, alphanumeric, etc.
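If you want to play with generating these yourself, the widely used third-party Python "qrcode" library makes it a couple of lines (assuming you have it installed - this is not anything from my own systems, and the short URL is hypothetical):

import qrcode   # pip install qrcode[pil]

# Build a QR symbol for a short URL and save it as a PNG image.
img = qrcode.make("https://4.gg/example")
img.save("example-qr.png")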


Summary

There are many 1D and 2D coding systems, and even some clever new colour systems, and picking the right one is a good idea. You want something compact, with good error correction and detection. It is a shame so many systems opt for the worst of 1D coding using Code 39 fonts, especially when the data is purely numeric and could be much better coded as ITF or Code 128.


P.S. My card ordering system allows you to create cards with any of the above bar coding systems. The Odeon card is an example.

Thursday, 5 January 2017

URL shortener for barcodes

Some time ago I made a URL shortener site specially aimed at use with barcodes.

The site is http://4.gg/

Today I changed it to use QR codes rather than DataMatrix codes (IEC 18004 rather than IEC 16022), which means the codes now carry 16 rather than 13 characters of unique data. I prefer IEC16022, but I have to recognise that IEC18004 (QR) codes are more popular.

The idea is that the URL it makes is short, but no shorter than needed to fit in a sensibly small barcode. Hence the DataMatrix version is a different size to the QR version.

Using a URL shortener site like this allows for a smaller and easier to read barcode.

We use it internally for all sorts of things, including on ID cards, and invoices, and so on.

The site holds very little data - it has the code to URL mapping, a hit count, and the IP from which the URL/code was created.
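So, in effect, the whole back end could be sketched as little more than this (a minimal illustration with made-up table and function names, not the actual site code):

import sqlite3

# Short code -> URL, a hit count, and the IP that created the mapping.
db = sqlite3.connect("shortener.db")
db.execute("""CREATE TABLE IF NOT EXISTS links (
    code TEXT PRIMARY KEY,
    url TEXT NOT NULL,
    hits INTEGER DEFAULT 0,
    creator_ip TEXT)""")

def add_link(code, url, ip):
    db.execute("INSERT INTO links (code, url, creator_ip) VALUES (?, ?, ?)",
               (code, url, ip))
    db.commit()

def follow(code):
    """Bump the hit count and return the URL to redirect to (or None)."""
    db.execute("UPDATE links SET hits = hits + 1 WHERE code = ?", (code,))
    db.commit()
    row = db.execute("SELECT url FROM links WHERE code = ?", (code,)).fetchone()
    return row[0] if row else None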

I don't think I have advertised it much, well, at all even!

So I had a look today, whilst changing to QR code basis.

To my surprise it has been used to create URL/codes from over 59,000 different IPs. A lot are ours, a few IPs cover hundreds of thousands of URLs, and we have over 1.4 million URLs in total, but still, I am quite surprised.

Thousands are youtube links, lots are links to other short URLs.

So I looked at the usage (hits) and the top one has over 500,000 hits and is a 404 not found? After that are the ones on the main 4.gg web site (e.g. BBC) and then a few specific web sites with different URLs, most of which now fail.

Even so, over 3 million hits, not bad for something I never published!

Sunday, 1 January 2017

Happy New Year

I sort of felt that I should post something, but now I come to it, I am not sure what to say.

My new year's resolution is the same as last year, 5120x2880, which is the resolution of the nice 5k Mac I use :-) I don't really make traditional "resolutions", never have done. If I set my mind to something there is no reason to wait for a new year. Like anyone else getting older, I hope to stay reasonably healthy - but that has been slightly thwarted by nearly two weeks of rotten cold / cough. I really think I am getting over that now, I hope so, so the year should start on a good footing.

I have a great family, and that moves on whether the year is ending or not. Another grandchild on the way - which is great news, but makes me feel even older.

The company (A&A) is doing well - the upgrades we have done over the last year have helped make sure we keep up with ever-increasing usage and manage to offer what I really feel are good value packages. There is yet more to invest in the coming year, with a lot more 10Gb/s stuff happening, but I am happy we can afford to make that investment - I do feel we have the balance right. The company is 20 later this year, so we should do something for that!

Politics in the UK is nearly as mental as in the US. OK, UK politics is a tad less likely to lead to worldwide armageddon, but still crazy. Our illustrious leader, May, is still suggesting radical moves, like abandoning Human Rights, which is just plain scary. We have the DE Bill to contend with, which is also scary. And the IP Act, with all its secret orders, should be challenging.

I did one fun thing last year - I learned a new skill and was tested and approved - drone flying. I have not actually done any commercially yet, but it was quite interesting just learning something completely new and different at my age. Even so, the law on that is probably changing soon as well.

I do plan to try and come up with some sort of holiday plan this year. Oddly, things are getting a tad strange for holidays - it used to be one family holiday a year, but with the kids having their own families now, things get more complex. A holiday with my darling wife, obviously, but probably also a holiday with some of my mates, which I did for the first time last year (LA and Vegas). We'll see how that goes and what I can afford.

It should go without saying, but always seems necessary for so many people to say, so I wish everyone a Happy New Year. Wishing does not really do anything (much like praying) but saying it does make people feel I am a nice person (which I am, obviously). Maybe that is the cold and/or alcohol talking. Even so, I do hope everyone has a good year...

My latest clever trick is that hard pressing the "send" on iMessage means you can send with bubbles and screens, like "fireworks" (saying "Happy New Year" sends with fireworks automatically). Yes, by so much, I am not the first to know that. But that is what the image is.

I have a busy week ahead, change freeze over, we have a new FireBrick release, and upgrades for that, and loads of other things now that I am allowed to tinker. Still hoping, after this cold, I have my mojo back. Right now I have a temperature, I think, and have had to turn on air-con in the man-cave. Wish me luck (as if wishing worked), ta.

Friday, 30 December 2016

SendNotificationResult

For anyone trying to work out why Microsoft Exchange push notification message responses are not being accepted by the server: it took me a while, but it seems that it does not accept a "chunked" response.

We were sending the response from a CGI script under apache, and that is normally chunked.

But there is no way to guess what is wrong. Lots of examples on the Internet, but none worked.

We were sending text/xml with :-

<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
<Body>
<SendNotificationResult xmlns="http://schemas.microsoft.com/exchange/services/2006/messages">
<SubscriptionStatus>OK</SubscriptionStatus>
</SendNotificationResult>
</Body>
</Envelope>
"

I tried every combination of xmlns tagging and all sorts, and eventually solved the problem by sending with a Content-Length header rather than chunked encoding. That is what it wanted.
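If it saves anyone else the pain, here is a rough sketch of the sort of CGI response that did work for us - an explicit Content-Length so apache does not fall back to chunked transfer encoding (Python, with details assumed rather than copied from our actual script):

#!/usr/bin/env python3
import sys

# The SOAP body Exchange expects back for a push notification (as above).
body = b"""<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
<Body>
<SendNotificationResult xmlns="http://schemas.microsoft.com/exchange/services/2006/messages">
<SubscriptionStatus>OK</SubscriptionStatus>
</SendNotificationResult>
</Body>
</Envelope>"""

# Giving a Content-Length lets apache send the response un-chunked,
# which is what Exchange turned out to want.
sys.stdout.buffer.write(b"Content-Type: text/xml\r\n")
sys.stdout.buffer.write(b"Content-Length: %d\r\n\r\n" % len(body))
sys.stdout.buffer.write(body)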

Arrrrg!

But I guess it is in line with the rest of the documentation, which is pretty crap (in my opinion), e.g. one page listing a field as URL in one place and Url in another, and we had the wrong one. You just have to guess what is wrong half the time.

Anyway, this blog post is for those searching for it later - Wisdom of the Ancients, thanks to XKCD.



Wednesday, 28 December 2016

BT and their wifi adverts

BT have made some interesting claims regarding their WiFi, with the latest being that it is the "most powerful".

Now this is a rather odd claim, as "power" is not something that is all that relevant - power (measured in Watts) is not that helpful as a measure of WiFi; indeed, many smaller APs with lower power can (I believe) provide better coverage and performance. Saying you have the most powerful WiFi is like saying your house has the brightest street light outside it. The main impact is that it will make other WiFi nearby worse.

BT have made all sorts of claims before, all of them (in my view) rather suspect. The claim of most powerful WiFi, when WiFi is a radio data system built to strict internationally and nationally agreed standards, is rather odd. The WiFi will have a power within the standards and legislation, like any other WiFi. It cannot, in practice, be more powerful.

They even published a document to justify the claim (here).

First issue: "The BT Smart Hub has superior specifications than the routers of all major broadband providers". So it is only the most "powerful" if you ignore the smaller providers - they only look at "major" providers. AAISP have been offering Unifi APs and packs of multiple Unifi APs for some years now, but that does not count, as we are not a "major" provider.

Second issue: The comparison covered many things, but not one of them was in fact "power"! They state: "The most important aspect of wi-fi for customers is their Transmission Control Protocol (TCP) throughput.". Whilst this is actually quite a good metric, it has absolutely nothing to do with the claim of being most powerful. Power is measured in Watts and is not a measurement of download speed.

Third issue: They actually tested the WiFi. This is good, as that is what they are claiming is most powerful, but they are selling an "Internet Access Service" using this. The tests have nothing to do with Internet access (you can tell from the speeds they measured), and for most people any speed on the WiFi that is over the speed of their Internet access is irrelevant, so no help. Yet the advert is to sell Internet access, not simply WiFi APs.

Basically, they are simply claiming they have a good 3x3 antenna, single-AP WiFi system that they provide with the Internet access service they sell, and that it is somehow more "powerful" than that of other ISPs. They ignore the other (smaller) ISPs selling systems just as good. They ignore those selling multiple access point solutions, which are better. They ignore all of the non-ISPs also selling this equipment. And they ignore that the actual "power" is the same on these devices, so their claim of most "powerful" is not actually about "power" at all but about TCP throughput.

Anyway, yes, consumers want an Internet access service that is good. If BT are "most powerful" in that, why are many ISPs (including AAISP) way higher in ispreview's list? (here)