I don't know if the RSA SecurID key fobs use the same algorithm as OATH-based keys (as per the picture), but they certainly use the same principle. There is a secret of some sort that exists in the key fob, and also exists somewhere else, such as on the server to which you wish to log in. This secret has to exist in plain text because it is the input to a function which combines the secret and the time of day to make a number (typically 6 to 8 digits, changing typically every minute).
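I don't know RSA's exact function, but the OATH one (TOTP, RFC 6238, built on HOTP, RFC 4226) is public and tiny - roughly this, in Python (the seed value here is made up for illustration):

```python
import hmac, hashlib, struct, time

def totp(secret, t, step=60, digits=6):
    """RFC 4226/6238 style code: HMAC the time-window counter with the shared secret."""
    counter = int(t // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The same secret in the fob and on the server yields the same code
# for the same time window - that is the whole trick:
secret = b"hypothetical-shared-seed"
print(totp(secret, time.time()))
```

Note that nothing here is reversible in a useful way: the server can't avoid holding `secret` itself, because the HMAC needs the raw value as input.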
This has security implications in itself. You see, with a normal password the password itself need not exist on the server - instead you can (usually) hold a hash. You make a one-way hash of what the user entered and see if it matches the stored hash, but you have no way to work out from the hash what password would match. It essentially means that (in principle) the server does not have to hide the hashes - it could publish them. That would, of course, allow off-line dictionary attacks and the like, so normally they are kept secret, but in principle they are less of a security risk.
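A minimal sketch of that hash-only check, using Python's standard library (salted PBKDF2 here is just one common choice of one-way function):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # The server stores only (salt, digest) - the password itself is never kept.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored):
    # Re-hash the attempt and compare; there is no way back from digest to password.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse")
print(check_password("correct horse", salt, stored))  # True
print(check_password("wrong guess", salt, stored))    # False
```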
With one-time codes like SecurID's, the secret is the input to a function, so it cannot be stored as a one-way hash like this. It could be stored encrypted, but the software on the server that checks the code must have the decryption keys as well. At one end, in the key fob itself, the secret is typically physically secure in that the device has no way for it to be extracted. Some devices even have hardware fail-safes that wipe the memory if someone tries tampering with them. But at the server end the key is not inherently secure.
Now, one approach is to use a separate validation server. When you log in, the server you are logging in to asks a separate server to check the code. This has the advantage that the validation server can itself have some physical security, rather than being just a normal server with a hard disk. It can simply confirm or deny a code, and even count how many wrong attempts are made so as to stop brute force attacks. I suspect this is the sort of expensive box RSA would sell to the banks for their end. Obviously that communication has to be secured and authenticated somehow.
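Roughly what such a validation box might do, sketched in Python - the TOTP function, serial-number lookup, drift window, and lockout threshold are all my assumptions for illustration, not RSA's actual design:

```python
import hmac, hashlib, struct, time

def totp(secret, t, step=60, digits=6):
    # Same RFC 4226/6238 style code generation as on the fob.
    mac = hmac.new(secret, struct.pack(">Q", int(t // step)), hashlib.sha1).digest()
    o = mac[-1] & 0x0F
    return str((struct.unpack(">I", mac[o:o + 4])[0] & 0x7FFFFFFF) % 10 ** digits).zfill(digits)

class Validator:
    MAX_FAILURES = 5                # lock out after this many consecutive misses

    def __init__(self, seeds):
        self.seeds = seeds          # fob serial -> shared secret (the risky bit)
        self.failures = {}          # fob serial -> consecutive wrong attempts

    def check(self, serial, code, now=None, step=60):
        if self.failures.get(serial, 0) >= self.MAX_FAILURES:
            return False            # locked out: looks like a brute force attempt
        now = time.time() if now is None else now
        secret = self.seeds[serial]
        # Accept the current window and one either side, for clock drift.
        for drift in (-step, 0, step):
            if hmac.compare_digest(code, totp(secret, now + drift)):
                self.failures[serial] = 0
                return True
        self.failures[serial] = self.failures.get(serial, 0) + 1
        return False
```

The point of the design is that the login server only ever sees a yes/no answer; the seeds never leave the box.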
The trick then is how the new key fobs get loaded into the server. I dare say there are ways that can be done with suitable encryption, but it is a security issue. You don't want anyone to ever see the secret, and you want it to end up only in the two ends, with no risk of being leaked or written down.
Of course, one of the nice things this whole system allows is using one key fob for multiple systems. The down side is that every system must hold the same secret in order to work. So using these on linux boxes and the like, with a simple config file, carries all of these same security risks if someone finds the file of keys. Using one fob on multiple servers means one compromised server could lead to others. Using a central authentication server instead creates additional points of failure and communications redundancy issues.
I have to say this sort of lends itself to a product opportunity - the server itself, perhaps speaking RADIUS, with a way to exchange keys securely with whoever makes the key fobs, but with some inbuilt physical security against attack and loss of secrets. Sounds like a fun project :-)
As for the news article, I wondered if the secrets have been leaked. On their own, someone would not know which accounts they are associated with. But they would allow someone to deduce which key is in use if they see someone's code. Maybe they would have a few to choose from if there are millions of keys, but see two codes and you'll have it. It would mean that the previously ephemeral code from the key fob becomes a security risk. It is not clear from the news article exactly what happened in this case, but it is clearly a problem. It also sounds horribly like the keys are not just in the two places they need to be - the key fob and the authentication server... You have to wonder.
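To illustrate why seeing a code or two is enough: given a leaked list of seeds, each observed code rules out almost every candidate. A simulation with made-up seeds and an assumed TOTP-style function (not RSA's actual algorithm):

```python
import hmac, hashlib, struct

def totp(secret, t, step=60, digits=6):
    # Same RFC 4226/6238 style code generation assumed throughout.
    mac = hmac.new(secret, struct.pack(">Q", int(t // step)), hashlib.sha1).digest()
    o = mac[-1] & 0x0F
    return str((struct.unpack(">I", mac[o:o + 4])[0] & 0x7FFFFFFF) % 10 ** digits).zfill(digits)

leaked = [("fob-%04d" % i).encode() for i in range(10000)]  # pretend stolen seed database
victim = leaked[1234]

candidates = leaked
for t_observed in (0, 60):          # attacker shoulder-surfs two codes, a minute apart
    seen = totp(victim, t_observed)
    candidates = [s for s in candidates if totp(s, t_observed) == seen]

print(len(candidates))              # the victim's seed, and almost certainly nothing else
```

With a 6-digit code, one observation cuts 10,000 candidates down to about one in a million each surviving by chance; a second observation all but guarantees a unique match.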