2012-10-20

Returning to OSPF

Now that the IPEXPO is over, I can get back to work, and to OSPF. It seems (from discussions at the EXPO) that people do indeed just expect OSPF to be in a router. This is what we expected, to be honest. We have sold boxes to people at the lower end (SME) where OSPF is not needed or expected. We have also sold boxes at the higher end to ISPs where BGP will usually work just as well for the purpose (announcing connected L2TP routes into the network).

At A&A we just use BGP, so OSPF was not actually necessary for us. It is clearly nice to have, though, and other ISPs that have used FireBricks with BGP for some years now have expressed a preference for OSPF because that is how they normally work, which is fair enough.

I have broken the development down into three key steps:-

1. Hello: The hello protocol is used to find neighbouring routers on the network connection (subnet or interface) and establish which routers are the designated router and backup designated router. We have this stage working now, and it is more work than it sounds, as it includes all of the packet header processing and generation, and authentication in both directions (a sketch of the packet structures involved follows this list).

2. Database exchange and update: This is where the router communicates its current database of link state advertisements (LSAs), and then continues to update the database as changes happen. Each OSPF router holds the same database of LSA records. It may originate some of these records itself, and the rest come from other OSPF routers. Some of the records it originates are a result of the protocol itself (e.g. a designated router originates an LSA for the subnet it is on) and some may come from the routing table, such as connected L2TP session routes that the router has internally. It communicates the records to its neighbours to ensure every router is up to date with the same set of LSAs (the LSA header is included in the sketch after this list). Making this work is what I have to work on next. There is a lot of detail in the RFCs on this process, and it will be a bit of a slog to get through, and especially to test.

3. Shortest path calculation: The LSAs result in a database that contains topology details as well as other (external) routes. The topology describes how all of the OSPF routers are currently interconnected, and the logical cost of each link. Using this it is possible to build a tree, starting from yourself, that gives the shortest path (by cost) to every other node in the network. From this tree one can derive routing entries for the routes that the LSA database describes. For each of these routes the shortest path next hop can be used to actually install and update routes in the routing table.
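As a reference for steps 1 and 2, here is a minimal sketch in C of the main on-the-wire structures defined by RFC 2328: the common OSPFv2 packet header, the Hello body, and the LSA header. This is purely illustrative and is not the FireBrick source; the field names and the packed-struct approach are my own choices.

```c
#include <stdint.h>

/* OSPFv2 common packet header (RFC 2328, A.3.1) - 24 bytes.
 * All fields are carried in network byte order. */
struct ospf_header {
    uint8_t  version;       /* 2 for OSPFv2 */
    uint8_t  type;          /* 1=Hello, 2=DB Description, 3=LS Request,
                               4=LS Update, 5=LS Ack */
    uint16_t length;        /* length of the whole packet including header */
    uint32_t router_id;
    uint32_t area_id;
    uint16_t checksum;
    uint16_t autype;        /* 0=none, 1=simple password, 2=cryptographic */
    uint8_t  auth[8];       /* authentication field */
} __attribute__((packed));

/* Hello packet body (RFC 2328, A.3.2), followed by the router IDs of
 * neighbours heard on the link. */
struct ospf_hello {
    uint32_t network_mask;
    uint16_t hello_interval;            /* seconds between Hellos */
    uint8_t  options;
    uint8_t  priority;                  /* 0 = never DR/BDR */
    uint32_t dead_interval;             /* seconds before a neighbour is down */
    uint32_t designated_router;
    uint32_t backup_designated_router;
    uint32_t neighbour[];               /* router IDs of neighbours seen */
} __attribute__((packed));

/* LSA header (RFC 2328, A.4.1) - 20 bytes, common to every LSA type.
 * The (type, link state ID, advertising router) triple identifies the LSA;
 * age, sequence number and checksum decide which copy is newer. */
struct lsa_header {
    uint16_t age;           /* seconds since origination, capped at 3600 */
    uint8_t  options;
    uint8_t  type;          /* 1=router, 2=network, 3/4=summary, 5=external */
    uint32_t link_state_id;
    uint32_t advertising_router;
    uint32_t sequence;      /* starts at 0x80000001 */
    uint16_t checksum;      /* Fletcher checksum, excluding the age field */
    uint16_t length;        /* total LSA length including this header */
} __attribute__((packed));
```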

I did have to make an architectural decision here. Basically, the LSA database could have been directly incorporated into my existing routing structures, as BGP routes are now. That routing structure already works out best routes based on logic defined in BGP, and could be extended to understand the OSPF rules and link into the shortest path logic.

However, the decision I have made is to work in much the same way as other OSPF implementations - where the LSA database is separate and part of the OSPF function. This makes slightly more sense as the LSA database is different in structure and content from the existing routing. The normal routing works on a record relating to a prefix, and within that an ordered set of possible routes with the best at the top. Updates relate to the prefix itself, and to ensuring the best route for the prefix is passed on to neighbours and the forwarding table. For OSPF the LSA records can each need updating independently, regardless of whether they happen to be the best route for a prefix, so a simple list of all of the LSA records is more logical. OSPF also has to periodically refresh the LSAs it originates to ensure they do not time out, and handle individual LSAs aging out or being withdrawn independently of the prefix itself or its choice of best route.
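To make that concrete, here is a hypothetical sketch (not the FireBrick internals) of what a flat LSA list with per-record aging might look like. The MaxAge and LSRefreshTime values are the RFC 2328 constants; everything else is made up for illustration.

```c
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define LSA_MAX_AGE      3600   /* MaxAge: an LSA is flushed at this age */
#define LSA_REFRESH_AGE  1800   /* LSRefreshTime: re-originate our own LSAs */

/* One record in the flat LSA database, identified by (type, link state ID,
 * advertising router) and kept in a simple linked list, not keyed by prefix. */
struct lsa_record {
    struct lsa_record *next;
    uint8_t  type;               /* 1=router, 2=network, 5=external, ... */
    uint32_t link_state_id;
    uint32_t advertising_router;
    int32_t  sequence;
    uint16_t age_at_rx;          /* LS age when received or originated */
    time_t   received;           /* when it went into the database */
    int      self_originated;    /* non-zero if this router is the source */
    uint8_t *body;               /* raw type-specific contents */
    size_t   body_len;
};

/* Periodic housekeeping: work out each LSA's current age, refresh our own
 * LSAs before they expire, and flush anything that reaches MaxAge. */
void lsa_db_tick(struct lsa_record **db, time_t now)
{
    for (struct lsa_record **pp = db; *pp; ) {
        struct lsa_record *lsa = *pp;
        long age = lsa->age_at_rx + (long)(now - lsa->received);
        if (lsa->self_originated && age >= LSA_REFRESH_AGE) {
            /* bump the sequence number, reset the age and re-flood to
               neighbours (flooding is not shown in this sketch) */
        } else if (age >= LSA_MAX_AGE) {
            *pp = lsa->next;     /* unlink and discard the expired LSA */
            free(lsa->body);
            free(lsa);
            continue;
        }
        pp = &lsa->next;
    }
}
```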

Separating the LSA database also makes the coding stages distinct. I can do step 2, maintaining the LSA database and communicating it with neighbours, totally independently of step 3, where we pick the shortest paths and update live routing. The existing code allows me to track routing updates from other sources (e.g. L2TP) and feed changes into the LSA database as records we originate. It obviously also allows OSPF to inject and maintain the routes that step 3 creates back into the live routing tables. These interactions with the core routing can be logged and debugged carefully, and even simulated to test the OSPF code without upsetting the rest of the system.
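The shape of that boundary might look something like the sketch below. To be clear, these function names and types are hypothetical and not the real FireBrick API; they only illustrate how narrow the interface between the OSPF module and core routing can be.

```c
#include <stdint.h>

/* Hypothetical boundary between the OSPF module and the core routing code;
 * none of these names are the real FireBrick API. */

struct prefix { uint32_t addr; uint8_t len; };   /* IPv4 prefix, for brevity */

/* Provided by core routing: install or withdraw the routes that the
 * shortest-path calculation (step 3) produces. */
extern void core_route_add(struct prefix p, uint32_t next_hop, uint32_t metric);
extern void core_route_delete(struct prefix p);

/* Provided by the OSPF module: called by core routing whenever a route from
 * another source (e.g. a connected L2TP session) appears or disappears, so
 * OSPF can originate or flush the matching external LSA. */
void ospf_route_changed(struct prefix p, uint32_t metric, int removed)
{
    (void)p; (void)metric;
    if (removed) {
        /* flush our external LSA for this prefix (age to MaxAge, re-flood) */
    } else {
        /* originate or refresh an external LSA carrying this prefix/metric */
    }
}
```

Because everything crosses a boundary this narrow, the calls can be logged, replayed or stubbed out to exercise the OSPF code in isolation, which is what makes the simulated testing mentioned above practical.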

So, the plan is to work on step 2 and step 3 next week. Once I have OSPF working in some sensible way it will be added to alpha releases to allow testing and comments from customers. I am still somewhat busy, but getting OSPF working is quite high on my list. Anyone wanting to try the OSPF code should get on the testers mailing list (see lists.firebrick.co.uk).

