Daley Mayle wrote:
BA blames a power cut, but a corporate IT expert said it should not have caused "even a flicker of the lights" in the data-centre.
Even if the power could not be restored, the airline's Disaster Recovery Plan should have whirred into action. But that will have depended in part on veteran staff with knowledge of the complex patchwork of systems built up over the years.
Many of those people may have left when much of the IT operation was outsourced to India.
One theory from the IT expert, who does not wish to be named, is that when the power came back on, the systems were unusable because the data was unsynchronised.
Switch off and reboot always worked for me.
These DR schemes are complex, and frequently require a little bit of "magic dust" from the staff who've been in place for years and know the systems like the back of their hands.
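To put some flesh on that "unsynchronised data" theory: after an uncontrolled power loss the DR copies can lag or drift from whatever the primary last committed, and somebody has to establish how far before daring to fail over or switch everything back on. Very roughly, the sanity check involved looks like the Python sketch below (everything in it - the sqlite files, the "bookings" table - is invented for illustration):

```python
# A toy illustration of what "unsynchronised data" means before a DR failover.
# Everything here (sqlite files, the "bookings" table) is invented for the
# sketch; a real estate would compare replication/redo log positions rather
# than hashing whole tables.
import hashlib
import sqlite3

def table_fingerprint(conn: sqlite3.Connection, table: str) -> str:
    """Hash every row of a table in a stable order."""
    digest = hashlib.sha256()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY rowid"):
        digest.update(repr(row).encode())
    return digest.hexdigest()

def find_divergent_tables(primary_path, dr_path, tables):
    """Return the tables whose contents differ between the two copies."""
    with sqlite3.connect(primary_path) as primary, sqlite3.connect(dr_path) as dr:
        return [t for t in tables
                if table_fingerprint(primary, t) != table_fingerprint(dr, t)]

if __name__ == "__main__":
    # Build two toy copies so the sketch runs end to end; the DR copy is
    # deliberately missing a row, i.e. it has drifted from the primary.
    for path, rows in [("primary.db", [("BA0117", "LHR-JFK")]),
                       ("dr_copy.db", [])]:
        with sqlite3.connect(path) as conn:
            conn.execute("DROP TABLE IF EXISTS bookings")
            conn.execute("CREATE TABLE bookings (flight TEXT, route TEXT)")
            conn.executemany("INSERT INTO bookings VALUES (?, ?)", rows)

    suspect = find_divergent_tables("primary.db", "dr_copy.db", ["bookings"])
    print("Divergent tables:", suspect or "none")  # -> ['bookings']
```

Nobody would hash whole tables on systems that size, of course - you'd compare replication or redo log positions - but the point is the same: if the copies don't agree, "the power is back" does not mean "the airline is back".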
Outsourcing appears fine to the business-school types who make the spending decisions: "I saved X million per quarter" is a real feather in their caps.
The saving is usually achieved by accumulating what's known as "technical debt" - taking risks with the backroom stuff.
You could easily compare it to asking "Do we really need a co-pilot on these flights?" or "Do we really need all of those engines? Why not switch half of them off until they're needed?"
I've been poking about the industry sites, and there appear to have been 5 significant outages this weekend, all citing "Power issues".
I see two possibilities here:
1. Amateurish PR: a plausible default excuse when they're in deep shit and don't know when it'll be fixed.
2. Multiple businesses cloud-hosted at one big site (not necessarily all of the incidents) - knowledgeable sources are suggesting Capita as a common factor for at least three of them.
The educated differ from the uneducated as much as the living from the dead. - Aristotle