First, the notion that I think DL can do no wrong is not mine; I haven’t said it because I don’t believe it. Perhaps the real issue is that DL is doing a pretty good job of running an airline and others’ “favorite airlines” are not.
.
So, while there are solid metrics that can be used to measure the performance of airlines, let’s look at them and the notion of a statistical approach to customer service to see if it all stacks up.
.
Let’s look first at the notion of a customer service score. About two decades ago – which incidentally coincides with when US airlines had to report more and more data to the US DOT – a couple of professors decided to come up w/ a “model” that takes a bunch of unconnected metrics and, using a weighted scale, tacks one number on each airline as their idea of a score for airline quality. But a couple of questions should be asked:
1. How are denied boardings and on-time even connected in the eyes of the passenger? Who really makes the assumption that an airline’s performance in one customer service metric is indicative of the whole?
2. Which raises the question of who decided on the weighting of each of these metrics…. For those of you who care, here is what the authors say about how they came up with the weights:
“Weights were originally established by surveying 65 airline industry experts regarding their opinion as to what consumers would rate as important (on a scale of 0 to 10) in judging airline quality. Each weight and element was assigned a plus or minus sign to reflect the nature of impact for that criterion on a consumer's perception of quality. For instance, the criteria of on-time arrival performance are included as a positive element because it is reported in terms of on-time successes, suggesting that a higher number is favorable to consumers. The weight for this criterion is high due to the importance most consumers place on this aspect of airline service. Conversely, the criteria that includes mishandled baggage is included as a negative element, and is reported in terms of mishandled bags per 1000 passengers served, suggesting that a higher number is unfavorable to consumers. Because having baggage arrive with passengers is important to consumers the weight for this criterion is also high. Weights and positive/negative signs are independent of each other.”
Ok… so a group of airline industry experts sat down and decided that OT is far more important than the rate of denied boardings… except that a passenger couldn’t care less about denied boardings if they miss their connection, and vice versa.
3. Neither the government nor our brainy professors and their cadre of “experts” apparently could figure out how to include weights for network carriers and the complexity of their operation (i.e. was a passenger’s bag lost after a single nonstop flight or after a double-connection interlined international itinerary?). Anyone who understands the airline industry knows that airlines are not the same and don’t serve the same mission (actually, most consumers get the concept too). Even though the government requires airlines to report how many delays are due to each cause – including air traffic control, which is beyond the airlines’ control – the authors make no real attempt to factor those kinds of delays into their formula… a delay is a delay is a delay. Somehow Hawaiian’s and Skywest’s OT performance doesn’t look quite as stellar if B6 gets credit for the ATC delays it faces… and remember, the government, not the airlines, sets the number of slots that operate at key airports. Likewise, a customer service complaint for discrimination (which precious few Americans ever encounter) is counted no differently than a flight problem, which is the largest category of complaints against airlines at the DOT.
SO… despite the fact that those very real questions have not been addressed in the 21 years that our esteemed docs have been assessing airline quality, have a look at their formula…
AQR = [(+8.63 x OT) + (-8.03 x DB) + (-7.92 x MB) + (-7.17 x CC)] / (8.63 + 8.03 + 7.92 + 7.17)
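For anyone who wants to see how the weighting actually plays out, here is a minimal back-of-the-envelope sketch of the computation. The weights are the profs’ own; the units follow what’s quoted elsewhere in this post (OT as a decimal rate, invol DBs per 10,000 pax, mishandled bags per 1,000, complaints per 100,000); the sample inputs are illustrative ballpark figures, not official data (the bag rate in particular is a guess):

```python
# A back-of-the-envelope version of the AQR formula above.
# Units as quoted in this post:
#   ot: on-time arrival rate as a decimal (e.g. 0.80 for 80%)
#   db: involuntary denied boardings per 10,000 passengers
#   mb: mishandled bags per 1,000 passengers
#   cc: consumer complaints per 100,000 passengers

WEIGHTS = {"ot": 8.63, "db": -8.03, "mb": -7.92, "cc": -7.17}

def aqr(ot: float, db: float, mb: float, cc: float) -> float:
    metrics = {"ot": ot, "db": db, "mb": mb, "cc": cc}
    numerator = sum(w * metrics[name] for name, w in WEIGHTS.items())
    denominator = sum(abs(w) for w in WEIGHTS.values())  # 8.63 + 8.03 + 7.92 + 7.17
    return numerator / denominator

# Ballpark 2010 industry-average-ish inputs (mb=3.5 is hypothetical; the
# others come from figures quoted later in this post):
print(round(aqr(ot=0.80, db=1.1, mb=3.5, cc=1.22), 2))  # -1.21
```

With those ballpark inputs the score lands right around the 1.20 industry average quoted further down – the published scores appear to be reported as magnitudes, where lower is better.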
So, that said, let’s take a look at the components behind the perception that a single number can be assigned to customer service.
By far the most significant number is on-time… and as people on this board have pointed out in many places, the number is still not entirely accurate because OT performance is reported separately by each operating carrier… which is very difficult to nail down when you consider that several regional airlines fly for multiple network carriers. Considering that 25% or more of domestic ASMs (never mind the share of flights) are operated by regional airlines, how meaningful is an on-time number for one part of a carrier’s operation when the rest is not counted? Add in that int’l flights are not counted, nor are mechanicals, and all of a sudden the OT number becomes pretty worthless pretty fast.
First, the reporting airlines as a group averaged an OT of 80%.... you have a 4 in 5 chance of your flight arriving at the gate OT (minus the exclusions noted above). But when you look at OT by carrier, you find that most of the carriers are scattered within a 3-point range around 80%.... by most statistical measures, 2-3 points is not a large enough gap to really say that one airline is better than another, esp. when you consider that OT changes pretty significantly by month and from year to year. While normally reliable Skywest and Southwest got high marks for OT for the year, the wheels fell off their operations in December 2010, when they were in the mid-60s, 12-15 points lower than the much larger network carriers… what gives… and how is a consumer supposed to take a few aberrant pieces of data and make a meaningful purchase decision?
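To make the month-to-month point concrete, here is a toy sketch – the monthly figures are hypothetical, just shaped like the Skywest/Southwest pattern described above (low 80s most of the year, mid-60s in December):

```python
# Toy illustration of month-to-month swings: a hypothetical carrier that runs
# ~82% OT most of the year but melts down into the mid-60s in December.
from statistics import mean, stdev

monthly_ot = [82.5, 83.1, 81.8, 82.9, 83.4, 80.2,
              79.8, 81.5, 84.0, 83.2, 82.7, 65.0]   # December collapse

print(f"annual mean: {mean(monthly_ot):.1f}%")           # ~80.8%
print(f"month-to-month stdev: {stdev(monthly_ot):.1f} pts")

# The single bad month drags the annual average down by ~1.5 points -- about
# the same size as the 2-3 point gap separating most carriers for the year.
print(f"mean excluding December: {mean(monthly_ot[:-1]):.1f}%")  # ~82.3%
```

(This is a simple average of monthly percentages, not flight-weighted, so treat it only as an illustration of how one aberrant month can swamp the cross-carrier differences.)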
Looking at the network airlines as a group, since presumably they should have somewhat similar nationwide networks, you will find that DL’s 77.4% OT is just 2.5 points lower than average – which is exactly where AA’s OT was, and almost the same as WN’s. Industry-leading (for the mainland) UA was 5 points better than average… which was about 8 points better than DL. So the biggest OT gap was between DL and UA… all things considered, most people might weigh OT statistics when you are talking about an 8-point difference… that would be statistically significant. Problem is that UA and DL don’t have the same type of network… CO and DL do, and their OT is within a couple points….
But look at B6… it had the lowest OT of any large jet operator… and yet people love its service and its prices (even if other carriers match them)… which raises the question again as to what statistics really matter. B6 has defined its existence by flying into delay-prone NYC but offers a product that is as good as or better than the network carriers’ (even though B6’s largest competitors have upgraded their domestic product to significantly narrow the gap).
.
So the real question that appears to be missing in all of this assessment of airline quality is a POSITIVE metric – something that says how well customers like the service they receive… not how many complain….
Read on.
Involuntary denied boardings – one of the most frustrating passenger service experiences – are measured on the basis of occurrences PER TEN THOUSAND passengers…. Even though the average for the industry is slightly over ONE involuntary denied boarding per TEN THOUSAND passengers boarded, there is considerably more statistical variance in the data than with OT. DL and FL have some of the lowest invol rates in the industry, with FL slightly lower than DL (excluding B6, which doesn’t overbook at all yet obviously still generates some invol DBs – apparently due to weight restrictions or equipment downgrades). The network carriers are pretty well stair-stepped at multiples of DL’s invol rate – AA is 2X, UA is 3X, US is 4X, CO is 4.5X…. the worst large-jet carrier, F9, is 2.5X the industry average… but the question still has to be raised… how does this metric affect purchase decisions – and no one can really say, other than a couple docs who “asked some experts” what they thought mattered.
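Here is what those multiples mean in absolute terms – DL’s base rate below is a hypothetical stand-in (the post only gives an industry average of “slightly over one per 10,000”); the multiples are the ones quoted above:

```python
# Converting "multiples of DL's invol rate" into absolute passenger counts.
DL_RATE_PER_10K = 0.4                 # hypothetical low base rate for DL
MULTIPLES = {"DL": 1.0, "AA": 2.0, "UA": 3.0, "US": 4.0, "CO": 4.5}

passengers = 1_000_000                # a hypothetical million boardings
for carrier, mult in MULTIPLES.items():
    expected = DL_RATE_PER_10K * mult * passengers / 10_000
    print(f"{carrier}: ~{expected:.0f} invol DBs per {passengers:,} pax")

# Even at 4.5x DL's rate, CO's figure works out to ~180 passengers out of a
# million -- awful for those 180, invisible to the other 999,820.
```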
.
The same type of arguments that apply to the previous two metrics can be made about mishandled baggage… AA and DL have exactly the same rates, just a bit below average. UA was a bit higher, but hardly by enough to make a person change his purchase decision…. US and CO did fare better than their larger network peers, but only by about 25%.... significant to the average consumer? Doubtful.
.
So then we get to consumer complaints to the DOT. For years, in the pre-deregulation and early post-deregulation era, DL made it a corporate goal to have the best DOT complaint ratio in the industry… for whatever reason, that changed, because they haven’t won that title in decades. But let’s look at the numbers. DOT complaints are measured in occurrences per 100 THOUSAND passengers. The average for the industry in 2010 was 1.22 complaints per 100 THOUSAND passengers. So 12 passengers in a million were unhappy enough with their airline experience to complain to the government. But how do the airlines compare with each other? DL, with about 2 complaints per 100,000 passengers, had the highest complaint ratio and the worst in the US industry; AA was at 1.44, CO a tad higher, and US and UA around 1.6. So the difference between DL as the worst network carrier and AA as the best amounted to roughly 0.55 complaints per 100,000 passengers boarded… or 55 more complaints per 10 MILLION passengers – less than what DL boards in one month.
Something else is noteworthy about customer service complaints to the DOT… even though regional carriers fared worse than their network brothers in many statistics – OT, invol DBs, and baggage handling – when it comes to DOT complaints, the network airlines’ statistics are worse than their regional peers’. Maybe the public really does understand that when it all boils down to the bottom line, the network carrier is responsible for its regional carriers. And maybe because DL has the largest regional jet partner system in the world, it gets dinged for the actions of its many regional partners…. Sure, DL should exercise oversight… but maybe we should be asking why DL is not watching its regional carriers more closely.
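The complaint-gap arithmetic, spelled out (DL’s rate is written as 1.99 here since the post rounds it to 2, which is what makes the gap match the 55-per-10-million figure above):

```python
# Rates are complaints per 100,000 passengers, as quoted in this post.
dl_rate, aa_rate = 1.99, 1.44          # DL rounded to "2" in the text
gap_per_100k = dl_rate - aa_rate       # 0.55
gap_per_10m = gap_per_100k * (10_000_000 / 100_000)
print(f"{gap_per_10m:.0f} extra complaints per 10 million passengers")  # 55
```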
But even if we assume the profs’ whole process of boiling customer service down to a single number is valid, how do DL and the other network carriers (DL’s peers) rank?
The average for the entire industry using the profs’ methodology is 1.20. Among network carriers, DL ended up slightly above the industry average at 1.22, bested slightly by US at 1.17. AA came in only a tad worse than DL at 1.28, and UA was at 1.31. Despite being perceived as one of the highest-quality network airlines, CO was behind DL, though only by 1/100 of a point.
Not surprisingly, low-fare, non-network carriers fared better. FL came in #1 with a score of 0.48, but it shouldn’t get too comfortable with that score because its new parent WN only managed a 1.01, bested by B6 and AS.
The worst airline in the rankings? American Eagle at 2.82. At 1.72, even Atlantic Southeast’s ranking looks stellar.
So what did we learn from this little thesis?
- Most customer service metrics in the airline industry show very little difference between carriers.
- Most of the metrics measured, except for OT, affect a relatively small number of customers.
- Except for OT, the metrics are negative metrics – meaning they tell you nothing about how GOOD the experience is for the vast majority of customers who find their airline experience acceptable or even good.
.
Customer service rankings – which every airline CEO will tell you are a focus – obviously don’t dictate the future of companies. Network carriers do not really fare that much worse than some of the low cost carriers… and the low cost carriers miss some of the metrics by a significant amount compared to other carriers.
.
Since there really isn’t a molehill’s worth of difference in quality between most US airlines, maybe what really does matter is how well you run the airline – which is measured by financial numbers, and those ARE significantly different across the industry.