US Airways partly blames legacy systems for March glitch

That was my question. Didn't they do enough testing? Also, why did they implement it during the winter, plus spring break? Ice storms are common in March.

The lack of testing was the issue last spring when the "new" website was brought online.

Someone high up is being snowed by the IT boss (or perhaps the higher-ups are being blinded by Kool-Aid-colored glasses). Lack of testing IS NOT the right way to do things. Any IT person with half a brain can tell you that. Granted, you can't anticipate every single problem before going live, but you should be able to prevent major issues such as what happened with these two events (the new website and the SHARES migration).
 
What's ironic is that Sabre is pasting a Qik-style overlay on top of their current green-screen product and calling it a next-generation system. Don't believe the BS. It's still the same 40-year-old mainframe technology under the covers.

At least Shares isn't pretending to be a perfumed pig....

The underlying stuff works. All the time. Which is something that's often lost on some. I'd sooner have an overlay over sabre than the crap that US has today.
 
It's kinda like the IBM AS400, I mean how do you spell B-O-R-I-N-G? Answer is, AS400!!! All they do is work! Day in Day out.

But, it has a sexy new name: iSeries...err, System i.

I make more with a few of those and a clunky old zSeries box in one quarter than US makes in a good year.

Beery never got the memo that for transactional systems (and hey, paper tickets or not, reservations is transactional)--having a "legacy" backend is oftentimes a positive, especially when your platform and infrastructure people don't know what they are doing (and, from all accounts, those in Tempe do not).

Ideally, everyone would love everything to be service oriented and running on open systems. The problem (from where I stand) is that nobody has yet ginned up a GDS that can meet the needs of a large airline (although as mentioned above, they seem to be in the works). The difference is that those are being written by people who have decades of experience at this, not the IT crowd in Tempe.

It's all about knowing one's limitations. In the IT arena, these guys have done nothing except prove that their mouths are clearly bigger than their stomachs since the day the merger closed.........
 
What an excellent point!! I'm trying to think of one single transaction based application that I have run up on in my life that doesn't use some kind of mainframe/AS400 type of computer and I can't think of one :eek:

Because you are an old fart ( :D ), I almost guarantee all the transactional stuff you have seen/done is on big iron. Because I am not an old fart, but one who professes to know this stuff, the same applies for 90% of what I've seen...

I only deal with the paper output so all of the rest of it, I'm not all that astute. When it comes to high volume transactions apparently Windows based systems aren't robust enough?

This is going to border on religious warfare, but no, they are not. This is one of those questions that ultimately goes to the acumen of the architects and engineers who design and implement a system. That said, if I had something that was transactional in nature, Windows would not be the platform that I'd start with. (a disclaimer--I'm sort of paid to think about this kind of stuff from time to time)

I don't even know if SHARES is windows based although given the problems and reliability issues thus far it would be easy to jump to that conclusion.

Here is the dirty little secret that Travis and Joe don't want you to know: the backend of SHARES runs at EDS. On something called TPF (or it did, pre-merger--I don't think that would have changed). The backend of Sabre runs on TPF (or HP's nonstop--but the ass-end is still not on an open system platform). So, in both instances, US (or EDS) has to take the nifty tools from a frontend solution and adapt them to scrape/push data into a "legacy" system. That did not change with the movement to SHARES. A point that is often lost on people.

What did change is that instead of paying EDS to write the tools (or change the wrapper on the tools, as the case may be) that an airline would use to do things like pricing, CRM at reservations/airport/gate, accounting, inventory, and all the other things you would need to poke the backend to do, SHARES allows the Tempe folks to do the same thing, but inhouse.
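To make the "scrape/push data into a legacy system" idea concrete, here is a toy sketch of what a frontend wrapper around a green-screen backend amounts to: send a cryptic terminal entry, get back a fixed-format text screen, and pull fields out of it by column position. Everything here (the host command, the screen layout, the field offsets) is invented for illustration; it is not the actual SHARES or Sabre interface.

```python
# Illustration only: a toy "green screen" scraper of the kind a GDS
# frontend might wrap around a TPF backend. The host command ("*A1"),
# the screen layout, and the column offsets are all made up.

def fake_host(command: str) -> str:
    """Stand-in for the mainframe session: returns a fixed-format screen."""
    if command == "*A1":
        return ("1 SMITH/JOHN      US1234Y 01MAR CLTPHX HK1\n"
                "2 SMITH/JANE      US1234Y 01MAR CLTPHX HK1\n")
    return "UNABLE - INVALID ENTRY"

def scrape_pnr(screen: str) -> list:
    """Parse passenger lines out of the screen by fixed column positions."""
    records = []
    for line in screen.splitlines():
        if not line[:1].isdigit():
            continue  # skip anything that isn't a numbered item line
        records.append({
            "name":   line[2:18].strip(),   # cols 3-18: surname/first
            "flight": line[18:25].strip(),  # cols 19-25: carrier+number+class
            "date":   line[26:31].strip(),  # cols 27-31: travel date
            "status": line[39:42].strip(),  # cols 40-42: segment status
        })
    return records

pnr = scrape_pnr(fake_host("*A1"))
```

The fragility is obvious: if the backend shifts a column or adds a line, every frontend that scrapes that screen breaks, which is why the quality of the middle layer matters so much.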

And therein lies the rub.

Travis and Joe don't want you to know that the backend of SHARES is still very much a transaction-based mainframe. The broken pieces are all the apparently dysfunctional chunks that Tempe has created to interface with said mainframe (although I believe they bought at least one of their fare quoting and a few ticketing tools from Amadeus). And this is the kicker--since it's the front and middle-end stuff that's broken, the backend could be running on Windows, Unix/Linux, VMS, Lisp machines, OS/390, Amiga, AppleDOS, or an Atari 800 and it would not matter a damn versus the status quo.

The frontend stuff is running on Windows, and it's apparently garbage. The backend stuff is still TPF. Now, one might argue that EDS has not put as much effort into modernizing (relatively speaking) SHARES as they have their Sabre offerings (indeed--I understand that SHARES is particularly weak in built-in revenue management functions of all things). However, that should be no cause for the kiosk, for instance, to quit. Or any number of other things.

These guys really believe they are good at this and can scale up their front and middle-end stuff to the big leagues. I think we have our answer on that score....... Joe, my man, should have kept those green screens instead of trying to scrape them into the clickity-clicky-goo garbage y'all ginned up......
 
If I read all of that correctly, let me paraphrase for the average Joe/Jane to comprehend. What you're saying, essentially, is:

What was created "in house" was essentially fine for a third-rate regional airline, but now, with all of the complexities of a true international operation, the flaws of sloppy internal code writing are not only showing but are magnified by the scope of work being attempted?

Basically. And a steadfast belief that they really are better off doing it inhouse rather than leaving it to people who know WTF they are doing.

The lack of honest organizational assessment might have worked for HP. It won't work with an airline the size of US. You either pay to have the talent inhouse or you source it to someone like EDS....

That's before I get into the constant habit of trying to blame something/someone else or flat-out lie about what really happened. Take that article for instance--once the data mapping was done and tested, the easiest part of the migration should have been the data dump to move all the PNRs (since you are going from TPF system to TPF system) yet even that got screwed up. And yet they tried to blame it on "legacy systems." It's nonsensical.

This stuff is not easy. But it's not nearly as hard as Tempe would like you to believe it is, either.
 
Basically. And a steadfast belief that they really are better off doing it inhouse rather than leaving it to people who know WTF they are doing.

And THAT is the real issue. When you think you can do it, but don't have the ability AND further don't want to let anyone outside come in and help (granted because they are cheap), then everyone is screwed.

I see this syndrome on a daily basis where I'm at.
 
What an excellent point!! I'm trying to think of one single transaction based application that I have run up on in my life that doesn't use some kind of mainframe/AS400 type of computer and I can't think of one.

I can... eBay and Google (there's two for you). Even a lot of the financial institutions (e.g., PayPal) are either already off or moving away from big iron and towards midrange systems.

One of the reasons that AS/400 is still viable today (as is MVS) is because DB2 (the core database) is relational, and they've added hooks to other IBM products like MQSeries and WebSphere. There's a TPF connector for MQ, but it's not exactly simple to integrate and extend.

When it comes to high volume transactions apparently Windows based systems aren't robust enough?

I'll politely disagree. My company processes between 2M and 3M transactions a day on a pure Windows platform, with room to grow.


Clue, I completely understand why people like to rely on TPF, but time to market suffers when you need to make changes (and that's -if- you can make changes). TPF as implemented on the airline mainframes doesn't understand the concept of relational data. Instead, the file structure is essentially a bunch of indexed flat files. Sure, it can process data quickly, but making changes takes about 4x longer than it would on other platforms. Changing the size of a field is a huge undertaking in TPF, especially if you have to reblock file sizes. Doing it in most relational databases is usually a command line entry....
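A small sketch of why the field-size point bites. In a fixed-length flat file, field boundaries are byte positions, not metadata, so widening one field means reading and rewriting every record under the new layout. The record layouts below are invented purely for illustration.

```python
# Sketch (invented record layout): why resizing a field in a
# fixed-length flat file is expensive -- every record must be
# rewritten, because field boundaries are positions, not metadata.

OLD = [("name", 10), ("city", 3)]   # old layout: 13-byte records
NEW = [("name", 16), ("city", 3)]   # widened name field: 19-byte records

def unpack(record: str, layout) -> dict:
    """Slice a fixed-width record into fields by position."""
    out, pos = {}, 0
    for field, width in layout:
        out[field] = record[pos:pos + width].rstrip()
        pos += width
    return out

def pack(values: dict, layout) -> str:
    """Re-emit a record under a given layout, padding each field."""
    return "".join(values[f].ljust(w)[:w] for f, w in layout)

old_file = ["SMITH     CLT", "JONES     PHX"]
# "Reblocking": read every record under the old layout,
# rewrite every record under the new one.
new_file = [pack(unpack(r, OLD), NEW) for r in old_file]

# Versus a relational database, where the same change is one statement
# (vendor-specific syntax, shown here only as a comparison):
#   ALTER TABLE passengers MODIFY name VARCHAR(16);
```

On a toy two-record file this is trivial; on a reservations database with hundreds of millions of records, plus every program that hard-codes those offsets, it's the multi-month project described above.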

You're also dealing with low level programming languages like Fortran and Cobol (rarely taught in school these days) as opposed to object oriented languages such as C and its variants (widely taught in school).
 
And THAT is the real issue. When you think you can do it, but don't have the ability AND further don't want to let anyone outside come in and help (granted because they are cheap), then everyone is screwed.

The irony in all of this would be if the analysis that resulted in the "inhousing" (i.e., cheap) solution was never actually run against the "outsourcing" number, and the damage caused by the poor execution of the former ends up costing more than the latter.

This may be one of those situations where these guys ran past a dollar to grab a dime.

I see this syndrome on a daily basis where I'm at.

I used to, but IT in my organization has been able to drive around a billion in added revenue and/or cost out over the past few years by doing things "correctly." Not cost-driven (per se), but correctly.

I can easily see why the crowd from Arizona misses the point.
 
You're also dealing with low level programming languages like Fortran and Cobol (rarely taught in school these days) as opposed to object oriented languages such as C and its variants (widely taught in school).

The university I teach at is definitely the exception. We still teach a semester of COBOL. We have placed students in good jobs when they graduate because of that skill.

The irony in all of this would be if the analysis that resulted in the "inhousing" (i.e., cheap) solution was never actually run against the "outsourcing" number, and the damage caused by the poor execution of the former ends up costing more than the latter.

This may be one of those situations where these guys ran past a dollar to grab a dime.

I used to, but IT in my organization has been able to drive around a billion in added revenue and/or cost out over the past few years by doing things "correctly." Not cost-driven (per se), but correctly.

I can easily see why the crowd from Arizona misses the point.

As has been said a lot about Tempe lately... they know the cost of everything and the value of nothing.
 
I can... eBay and Google (there's two for you). Even a lot of the financial institutions (i.e PayPal) are either already off or moving away from big iron and towards midrange systems.

eBay was (and to a certain extent is) still a big iron shop. There remain some rather large E10ks on their data center floor.

Google, OTOH, is defining their own market and completely reshaping the idea for distributed computing. It's not entirely fair to compare them to anything else, since nobody is doing what they currently are. Moreover, Tempe could never, ever, in a million years come close to anything that sophisticated. They don't have the talent.

One of the reasons that AS/400 is still viable today (as is MVS) is because DB2 (the core database) is relational, and they've added hooks to other IBM products like MQSeries and WebSphere. There's a TPF connector for MQ, but it's not exactly simple to integrate and extend.

Unless the TPF connector is the exception to the rule, it ought to be cake to extend. As for the validity of the 400, the reason why people still build systems around them is the availability and stability. It's obviously not for the nifty things you can do with the platform itself (Although IBM would vehemently disagree).

Look at it this way--LCC Shares sure looks like a bunch of windows boxes scraping data from and pushing it back into the TPF backend. That's it. So when I hear their CIO and the designated liar (Christ) talking about "legacy systems" being the problem, I ask myself "self, what's the real difference between 'TPF/Sabre and some frontends written by EDS' and 'TPF/Shares and some frontends written by LCC'?"

And I come up with "the sabre stuff written by EDS worked." The "legacy system" thing is nothing more than a head fake on their part.

I'll politely disagree. My company processes between 2M and 3M transactions a day on a pure Windows platform, with room to grow.

That's not heavy transaction traffic. That's what, 35 a second averaged over the day? That's two midsized 400s and whatever is out front. The transactional load is not even into the "think real hard about the architecture" size range if one has mainframe/mini hardware on the backside or even a small to middling open system cluster o' middleware and database.
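As a sanity check on that claim, a quick back-of-envelope using the 2M-3M/day figure quoted above (the 10x peak factor is just an assumption for illustration):

```python
# Back-of-envelope: 2M-3M transactions per day, averaged over 24 hours,
# is only a few dozen transactions per second.
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400

avg_tps = 3_000_000 / SECONDS_PER_DAY    # ~34.7 tx/sec at the high end
peak_tps = avg_tps * 10                  # assume a generous 10x peak factor
```

Even with the generous peak assumption, that's a few hundred transactions per second, which is well within reach of modest hardware on any of the platforms being argued about here.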

I'm not saying one can't build a successful architecture around wintel. I just can't think of any reasonable reason to start there (absent the dumbing down of platform admins or any number of other reasons which would drive the TCO up, but if you have to think about that many transactions, you can probably afford someone who knows one of them "complicated" operating systems..)

Clue, I completely understand why people like to rely on TPF, but time to market suffers when you need to make changes (and that's -if- you can make changes). TPF as implemented on the airline mainframes doesn't understand the concept of relational data. Instead, the file structure is essentially a bunch of indexed flat files. Sure, it can process data quickly, but making changes takes about 4x longer than it would on other platforms. Changing the size of a field is a huge undertaking in TPF, especially if you have to reblock file sizes. Doing it in most relational databases is usually a command line entry....

Yeah, but the beauty of it is that you can model all the stuff with a huge degree of certainty, run it first on your QA or Dev LPAR with very little effort, and so forth.

Given a clean slate, I'd probably not start there. But I know (allegedly) what I'm doing, as do my implementation and operation types. Based on the performance of the web site alone, do you really think LCC should be going down the road of rolling their own in a wintel environment?

If they'd stop trying to wrap a dysfunctional windows frontend around a perfectly good TPF backend, they might not be having the meltdown that's gone on for 3 weeks now.

You're also dealing with low level programming languages like Fortran and Cobol (rarely taught in school these days) as opposed to object oriented languages such as C and its variants (widely taught in school).

Good schools require a semester of Fortran and Cobol. Why? There is money in them there hills. For exactly these types of reasons: because every bank on the planet still has Cobol code coming out of its ears, and the vast majority of high performance scientific stuff is still written in Fortran.
 
Clue,

It was a combination of the technology and implementation in my opinion. It doesn't matter any more whose fault it is, it just needs to be fixed. I couldn't read my own notes, so I won't get specific, but they are aware of the issues that exist. As I said, what they do about it remains to be seen.

Did you get any sense of why they continue to try to spin this? Are they truly not aware of the cause, or do they not think the customer knows better? It matters to me as an indicator of what I can expect in the future.
 
Clue, I completely understand why people like to rely on TPF, but time to market suffers when you need to make changes (and that's -if- you can make changes). TPF as implemented on the airline mainframes doesn't understand the concept of relational data. Instead, the file structure is essentially a bunch of indexed flat files. Sure, it can process data quickly, but making changes takes about 4x longer than it would on other platforms. Changing the size of a field is a huge undertaking in TPF, especially if you have to reblock file sizes. Doing it in most relational databases is usually a command line entry....

I thought some more about this, but I'm curious: is your contention that it's generally hard/impossible to make changes or push data from one to the other, or merely harder than it would be in a relational DB? Reason is, you wrote something in the AA forum which may bear repeating:

I think the bigger problem was that both airlines migrated -- EDS created a new partition within Shares B for the combined US, so not only did they lose stuff from Sabre, they also lost some stuff from the HP partition in Shares.

If you have 15 months to plan it, and mainframe hardware to run it on, is it impossible? Or mere incompetence?
 
I thought some more about this, but I'm curious: is your contention that it's generally hard/impossible to make changes or push data from one to the other, or merely harder than it would be in a relational DB? Reason is, you wrote something in the AA forum which may bear repeating:

If you have 15 months to plan it, and mainframe hardware to run it on, is it impossible? Or mere incompetence?

My observation about it being harder to make changes is more a general problem I have with TPF.

Assuming the hooks between data points are already in place, it still takes a lot longer to code, test, and implement. Much of what goes into Sabre, Shares, Worldspan, etc. is still hard-coded, and can't be reconfigured easily on the fly. When I was at AA, it cost us an average of $20K for simple changes to TPF. It was well upward of $300K to do all the post 9/11 programming just to compare the watchlist against a flight manifest.

In an environment like Radixx's or Navitaire's, comparing the watchlist against a flight manifest is as simple as an inner join or a cursor operation. We wound up having to offload that processing into SQL running on Wintel to do the comparisons during the time it took for Sabre's programmers to get the functionality loaded and tested into TPF.
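For anyone who hasn't lived in SQL, here is roughly what "an inner join" means in this context. This sketch uses SQLite as a stand-in for whatever SQL/Wintel setup actually ran it; the table names and schema are invented for illustration.

```python
# Sketch of the watchlist-vs-manifest check described above, as a
# single inner join. SQLite stands in for the real database; the
# schema and data are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE manifest  (flight TEXT, pax_name TEXT);
    CREATE TABLE watchlist (wl_name TEXT);
    INSERT INTO manifest  VALUES ('US1234','SMITH/JOHN'),
                                 ('US1234','DOE/JANE');
    INSERT INTO watchlist VALUES ('DOE/JANE');
""")

# The whole comparison is one join -- no bespoke TPF coding effort.
hits = db.execute("""
    SELECT m.flight, m.pax_name
    FROM manifest m
    JOIN watchlist w ON w.wl_name = m.pax_name
""").fetchall()
```

That one query is the entire "compare the watchlist against a flight manifest" job, which is the contrast being drawn with the $300K of custom TPF work.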


Shares as a system isn't bad. It's the same platform CO uses, and I wouldn't consider them a substandard carrier.

As for the point about US tripping over dollars to save dimes, I wholeheartedly agree. It will eventually bite them in the ass, but as long as this quarter's numbers look good, nobody seems to care too much about the next quarter...
 
Native shares is a lot like Sabre. Almost the same entries as Sabre.

It's the QIK over Shares that sucks big time. Very convoluted....

We need to dump QIK and DoUgIe. Both have inflated egos..and neither works very well.
 
The irony in all of this would be if the analysis that resulted in the "inhousing" (i.e., cheap) solution was never actually run against the "outsourcing" number, and the damage caused by the poor execution of the former ends up costing more than the latter.

This may be one of those situations where these guys ran past a dollar to grab a dime.

I used to, but IT in my organization has been able to drive around a billion in added revenue and/or cost out over the past few years by doing things "correctly." Not cost-driven (per se), but correctly.

I can easily see why the crowd from Arizona misses the point.

And this is exactly what we told "them" over a year ago. We begged and pleaded because we knew this was going to happen; we were going to implode. THEY WILL NOT LISTEN AND WILL NEVER ADMIT THEY MADE A MISTAKE.
 
