SparrowHawk
Veteran
- Nov 30, 2009
- 7,824
- 2,707
It was Cleary.
As soon as I stop chuckling, go back to the pilots thread!!! That's just wrong.
There are multiple industrial backup generators at the Tempe data center and redundancies across the server and infrastructure platforms including the AT&T data services. The backup equipment is there for loss of power and fail-safe protections. Whatever the root cause of the outage may be, it wasn't because redundancies weren't funded or planned into the infrastructure. This was either a highly irregular event that normal disaster recovery plans failed to identify or else someone failed to do their job properly.
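For anyone wondering what "redundancies planned into the infrastructure" looks like in practice, here is a bare-bones, generic sketch of primary/backup failover logic. It is only an illustration of the concept; the hostnames, ports, and thresholds are made-up placeholders and have nothing to do with US Airways', AT&T's, or anyone else's actual setup.

```python
# Generic illustration of primary/backup failover logic.
# All endpoints, timeouts, and thresholds below are hypothetical placeholders,
# not a description of any real data-center configuration.
import socket

PRIMARY = ("primary.example.internal", 443)   # hypothetical primary service
BACKUP = ("backup.example.internal", 443)     # hypothetical backup service
FAILURES_BEFORE_FAILOVER = 3                  # arbitrary example threshold

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_endpoint(consecutive_failures):
    """Route traffic to the backup once the primary has failed repeatedly."""
    if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
        return BACKUP
    return PRIMARY

def monitor_once(consecutive_failures):
    """One monitoring pass: probe the primary and update the failure count."""
    if is_reachable(*PRIMARY):
        return 0                      # primary healthy, reset the counter
    return consecutive_failures + 1   # primary unreachable, count the failure
```

The point being argued in this thread is that logic of this general kind, plus generators and redundant circuits, was already in place; the open question is why it didn't prevent the outage.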
"Thanks, that's good to know. Just curious, do you know if any data was lost or security was breached?"
Sorry - I wouldn't know about that, but it seems unlikely that either of those would have occurred because of a lightning strike or facility fire. Just guessing though.
"…it wasn't because redundancies weren't funded or planned into the infrastructure."
Undoubtedly and obviously the world has at its disposal the tools to overcome this situation, because of the odds of this occurring and the backlash.
Did this create the RES outage "because of a lightning strike"?
"Undoubtedly and obviously the world has at its disposal the tools to overcome this situation, because of the odds of this occurring and the backlash."
What I have knowledge of, as a visual observer, is that the Tempe data center has backup generators, disaster recovery processes and procedures, and redundancies in critical infrastructure components. I posted these personally-seen observations to refute the false claims being made that this problem was proof that US Airways doesn't follow best practices for data center power backup and disaster recovery scenarios. I don't know what the lightning strike or fire did to temporarily knock down the systems. I further don't know whether a person or a system failed to perform as it should have in response to the catastrophic event, or whether that person or system operates under the direction of USAIT, AT&T, HP/EDS, or some other entity. Once the root cause has been identified, it is a near-certainty that remediation steps will be taken to prevent a recurrence.
To what extent could/should more have been done?
You think…
What are we talking about here? Your escape hatch sounds all too familiar; maybe some oversight and investigation should be involved?
Who was or was not paying attention? Who was not monitoring? Who is going to pay? Who and what is involved in the backup setup?
Since you know and spoke to the subject as if you are in the know…
OK: "What I have knowledge of, as a visual observer…"
After this past week's issue, the question is: how much money was lost? Was it more, or less, than the cost to build, implement, and maintain an off-site data center?
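Just to make that question concrete, here's a back-of-the-envelope comparison. Every figure below is a made-up placeholder; none of the numbers come from US Airways or from anyone in this thread.

```python
# Back-of-the-envelope comparison over a 10-year horizon.
# Every figure is a hypothetical placeholder, not real US Airways data.
HORIZON_YEARS = 10

loss_per_outage = 5_000_000       # hypothetical cost of one day-long RES outage
outages_per_year = 0.2            # hypothetical: one such outage every five years

build_cost = 20_000_000           # hypothetical cost to build/implement an off-site center
annual_upkeep = 2_000_000         # hypothetical yearly cost to staff and maintain it

expected_outage_cost = loss_per_outage * outages_per_year * HORIZON_YEARS
offsite_center_cost = build_cost + annual_upkeep * HORIZON_YEARS

print(f"Expected outage losses over {HORIZON_YEARS} years: ${expected_outage_cost:,.0f}")
print(f"Off-site data center over {HORIZON_YEARS} years:   ${offsite_center_cost:,.0f}")
```

With those particular placeholders the off-site center never pays for itself, but change the assumed outage frequency or the loss per outage and the answer flips, which is exactly the trade-off the question is getting at.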
Some years ago, there was a documentary showing Sabre's mainframe and backup systems. (Surprising that they let it be shown.) The protections are extensive indeed. (Not a bad idea in Tornado Alley.)
So I'm having a hard time understanding how this happened even with a lightning strike.