
For Paula Sonkin, September 17th was destined to be a hectic day. That was when J.D. Power and Associates would officially release its seventh and potentially most controversial New-Home Builder Customer Satisfaction Study rankings. This year's results would be the first to report satisfaction scores for every builder measured -- not just those ranking above the average -- in 21 major metro areas, five more markets than the study covered in 2002.
In a tightly controlled release of the findings, first to the winners in each region and then to the public, the new rankings unleashed a frenzy of builder meetings, memos, and press releases around the country -- and a flurry of calls to Sonkin, J.D. Power's senior director of its real estate industries practice.
"Most builders I talked to were delighted and excited" about this year's findings, says Sonkin, who has been a driving force behind J.D. Power's push into syndicated and proprietary customer satisfaction studies in a number of industries, including the new-home market.
Few can dispute the growing credibility builders derive from the independent third-party crown of excellence that J.D. Power bestows upon them. This year's results gave the entire industry a boost, showing customer satisfaction among new home buyers had jumped significantly compared with 2002, rising 8 percent overall (see "Happier Home Buyers").
"They have really done a great job of bringing attention to the issue of customer satisfaction," says Larry Webb, CEO for John Laing Homes.
The newest results, however, also created a predictable backdraft of questions, reigniting a long-standing debate within the home building industry about the methods and practices used by Westlake Village, Calif.-based J.D. Power -- elements that the public doesn't see.
Critics point to the company's attempt to both administer objective surveys and sell consultative services, which some perceive as a conflict of interest. Another concern involves recent efforts to improve the quality of J.D. Power's reporting by cross-referencing communities mentioned by consumers against builder-clients' customer lists. And some builders worry about J.D. Power's growing dominance in setting industry standards for quality and satisfaction -- standards that, in their view, don't always align with what actually goes into building a high-quality home.
Perceived Power
In the eyes of consumers, J.D. Power is a name to trust. With 35 years of providing market research information to consumers, from the automotive world to telecommunications, travel, and finance, the company has become synonymous with customer satisfaction. For builders who can attach the J.D. Power name to their own brand, the benefit is clear and unmistakable.
"It gives us credibility," says Alan Laing, Pulte's vice president of customer satisfaction. "In the classical sales sense, it's the ability to differentiate based on actual performance versus your peer group -- and that's all any good salesperson could ever ask for."
Pulte is just one of a number of big builders that have seen their reputations confirmed or enhanced by the J.D. Power annual rankings.
As with any scoring system, however, there are those who question whether the J.D. Power survey truly reflects companies that build strong customer satisfaction processes or simply those that have learned to score well on the test. While J.D. Power declined to break out its revenues, a significant portion comes from selling consulting packages to companies interested in improving customer satisfaction performance.
Thomas Miller, a construction defect lawyer and senior partner at The Miller Law Firm, in Newport Beach, Calif., is one of the doubters. "The biggest issue we deal with is people wanting to know, 'who are the good builders...who is the best?' " says Miller. "There is no consumer report that we think is an unbiased, non-industry-supported approach for a consumer to use when making a decision. I have to honestly tell them that the only thing that is out there, that we are aware of, is this J.D. Power survey. We feel it's biased. They aren't eliciting the right information from the ultimate consumer."
Builders, meanwhile, have the opportunity to compare the J.D. Power results from an insider's perspective. Many have their own sophisticated customer satisfaction research programs monitoring the reactions of home buyers in a way that, builders say, provides a more direct gauge of customer satisfaction than broader rating systems like J.D. Power's. "We have a methodology that gives me feedback on a real-time basis," says George Casey, Arvida's president of mid-Atlantic operations. "They're telling me how I did with my customers a year ago."
Admittedly, the J.D. Power syndicated study "is a snapshot in time," says Sonkin. "It is not necessarily meant to be a replacement for ongoing research by us or other builders." A high-ranking builder, the study suggests, is one that delivers "consistently high and above-average satisfaction across time in the home," says Sonkin. "There are lots of ways we could show this information, and this is the way we choose to reflect it."
Where many industry insiders have expressed concern, however, is with the statistical significance of the data.
The survey works like this: The study, which is funded by J.D. Power, covers all "qualifying" builders in the 21 markets it surveys -- those with at least 150 closings from the previous calendar year listed in public record. In April and May, a four-page, closed-end survey is mailed to home buyers. A minimum of 50 responses must be returned for a builder to be included in the ranking. The annual syndicated study results are released publicly every September.
J.D. Power applies an algorithm to each builder company, based on the number of names in public record, to establish survey minimums. "For a builder with 150 closings, the minimum return is 50 responses," says Sonkin. The return rate averages one-fourth to one-third of what's in public record, she says. "We have a statistically representative sample, or we don't include that builder."
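In rough terms, the screen Sonkin describes boils down to a few thresholds. The sketch below is purely illustrative: the 150-closing and 50-response floors come from the figures cited here, while the idea of scaling the minimum with the size of the public-record list is an assumption, not J.D. Power's actual, unpublished algorithm.

```python
# Hypothetical sketch of the qualification screen described in the article.
# The 150-closing and 50-response floors come from the article; scaling the
# minimum with list size is an assumption, not J.D. Power's actual formula.

def minimum_responses(closings_on_record: int) -> int:
    """Required returns for a builder, scaled to the size of its public-record list."""
    # Assume roughly a one-quarter return is expected, but never fewer than 50 surveys.
    return max(50, closings_on_record // 4)

def qualifies(closings_on_record: int, responses_returned: int) -> bool:
    """A builder is ranked only if it has enough closings and enough returned surveys."""
    if closings_on_record < 150:  # too few closings in the prior calendar year
        return False
    return responses_returned >= minimum_responses(closings_on_record)

# A builder with 150 closings needs at least 50 responses, the floor Sonkin cites.
print(qualifies(150, 50))   # True
print(qualifies(400, 80))   # False under this assumed scaling (would need 100)
```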
"What really counts is the size of the sample and what kind of responses they are generating," argues Bob Mirman, CEO of Eliant, an Irvine, Calif.-based customer satisfaction solutions provider to the building industry. "It's irregular to position one's data as being accurate when you're only getting 20 [percent] to 25 percent or even 30 percent response. The confidence level statistically is such that you can't support [the results]." In contrast, most builders strive for a much larger sample with their individual survey methods. "We are able to reach 90 [percent] to 95 percent of our customers," says Mike Humphrey, vice president of operations at David Weekley Homes. "We'll do over 8,000 surveys this year."
"We are ISO-9001 certified, we send our methodologies to major universities for evaluation, and we have our own quality assurance measures in place," says Sonkin, who contends the results are within acceptable limits to make the relative rankings valid.
Improving the Lists
In order to prevent bias, J.D. Power obtains customer lists from public record instead of the builders. However, this method has also proven to be less than perfect. "They have the right idea and the process is reasonable, but public record has inherent flaws," says Mirman. The first year the study was conducted in Southern California, John Laing Homes supported the study. "I am committed to buyers, so I wanted to give them a chance," says Webb. But when the results were announced, John Laing was ranked "below industry average" in overall satisfaction. "We couldn't believe it," says Webb. After studying the results in detail, John Laing determined the data was wrong. "We were lumped in with other builders," says Webb. "It included data from three communities that we didn't build."
Since that time, the questionnaire has been improved to minimize confusion. The survey now asks buyers to identify a builder from a code sheet and write in the name of their community. "Some builders close under a limited partnership, some close under their name, some close under the name of the community -- lots of different ways," says Sonkin. "That's why we ask for the verification back from the home buyer."
Despite these efforts, though, problems still occur. Case in point: Shea Homes. "Last year, the data that we received as a builder subscriber went to the community level of detail," says Martha Baumgarten, communications manager at Shea. "After reviewing the raw data, we found that it was difficult to determine what communities were actually being referenced. Respondents had a variety of ways of identifying their communities. Since we may have several active communities in any given city or master plan, without the specific home site addresses, it was impossible for us to sift through this so that we could accurately get to community level performance detail."
"This time around, after J.D. Power and Associates obtained their sampling from the public record and sent out their surveys last spring, we asked them to attempt to validate the responses by cross-referencing them with a list that we provided," says Baumgarten. "If this approach is successful, we should have a clean list of community names for our internal data analysis purposes."
To date, cross-referencing builder lists with public record has not been a standard practice proactively offered by J.D. Power. According to Sonkin, this is something the company may look at in the future. For now, it's up to builders to take the initiative. "If a builder is concerned about what we are mailing out, we are happy to discuss that with them," says Sonkin. "We're happy to confirm or verify whether their closings are listed in the public record. We offer builders several opportunities to discuss any concerns. Some have taken that opportunity."
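What such cross-referencing might look like in practice is simple to sketch. The example below is hypothetical -- the field names and the normalization step are assumptions, not Shea's or J.D. Power's actual process -- but it shows how a builder-supplied community list can separate responses that clearly match from those that need a closer look.

```python
# Hypothetical sketch of cross-referencing survey responses against a
# builder-supplied community list. Field names and the normalization step
# are illustrative assumptions, not an actual J.D. Power procedure.

def normalize(name: str) -> str:
    """Collapse case, punctuation, and spacing so name variants can match."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def validate_responses(responses, builder_communities):
    """Split responses into those matching a known community and those needing review."""
    known = {normalize(c) for c in builder_communities}
    matched, unmatched = [], []
    for r in responses:
        (matched if normalize(r["community"]) in known else unmatched).append(r)
    return matched, unmatched

# Two respondents write the same community differently; a third names a
# community the builder never built, like the ones John Laing describes.
responses = [
    {"buyer": "A", "community": "Oak Creek Ranch"},
    {"buyer": "B", "community": "OAK-CREEK RANCH"},
    {"buyer": "C", "community": "Maple Grove"},
]
matched, unmatched = validate_responses(responses, ["Oak Creek Ranch"])
print(len(matched), len(unmatched))   # 2 1
```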
To some, that kind of arrangement, as well as the close relationships developed between J.D. Power and builder clients, continues to fuel skepticism about whether the relationship skews the results. "They write the test, administer the test, and then they're consulting to those who want to take the test. It creates inherent conflict, especially when you have to pay for it," says Arvida's Casey. "They either need to be a consultant or an impartial facilitator, but not both." For those who don't score as well as they would like, J.D. Power's proprietary services are recommended to help identify problem areas and raise levels of satisfaction. But as one regional president for a top builder, asking not to be named, put it bluntly: "It's one of the biggest shakedowns in the world."
Putting It To Use
Despite the controversy, the study continues to grow in importance and value to many builders. "Are we [as an industry] better off with them than we are without them?" asks Webb. "I'd say yes. But they're not a panacea. Everyone needs to be clear about what they are as opposed to what is purported."
While the data may not provide a detailed direction for operational processes, builders are putting it to use by overlaying it with internal data for marketing reasons. "They give you information that you can use to go back and validate," says Humphrey. "We've got so much data, sometimes we lose sight of the primary drivers. It adds emphasis to where you need to spend your time."
For some, the overlay of J.D. Power data is easy to correlate. "I can predict our [J.D. Power] scores by looking at our internal results," says Laing. "They track right on." For others, the drivers are different. "Ours don't necessarily match the relative weightings of J.D. Power drivers," says Casey. "I pay attention, but I'm not going to build my business around it. The key is, I know customer service is a core part of our business, in much more detail."
George Seagraves, Northeast regional president for D.R. Horton, agrees: "In real estate, we're all concerned about customer service because of referrals, but we've found location to be the key driver with value of the product second" (See "Driving Satisfaction" chart above). Regardless, most agree the survey doesn't compare to in-depth research. "We don't learn a great deal from the diagnostics of J.D. Power," says Laing. "They are a validation of our strategy."
As a marketing tool, however, it can't be beat. "We use it every single day in every market we can," says Laing. "Our repeat and referral business has almost doubled in the past three years. You can point to our ability to put that newspaper article out there that says, 'We won.' There is a tangible value to repeat/referral business; it's fabulous business -- the best business you can get. We're approaching a 50 percent referral rate within the company; that's four billion dollars' worth of sales."
"Right or wrong," concedes Casey, "at least it's a standard method to compare competitors. You can see if the strategies you're using are working compared to the strategies your competitor is using. It's the one spot where you have a common denominator."
Under Pressure
The competitive pressure to win or place in the rankings, however, is also beginning to change builder behavior. "Our research and experience shows that the drive to solicit positive J.D. Power surveys causes some builders to go to extremes that could bias the information," observes Shea's Baumgarten.
In some instances, builders have sent letters to homeowners alerting them to the fact that J.D. Power surveys are being mailed and strongly encouraging them to respond. In other instances, builders have sent representatives door-to-door to ask buyers "what can we do for you?" just prior to the survey period. One building executive, who asked not to be named, recounted how a fellow employee who had purchased a home from a competing builder was offered a cash incentive to return their J.D. Power questionnaire to the builder's representative. Accounts from other builders, which could not be confirmed, suggest such tactics go beyond isolated incidents.
Industry consultants see the effects as well. "I've picked up several new clients as the pressure to compete on J.D. Power scores [increases] in some markets," says building industry quality consultant Carol Smith. "In some national companies, the competition between divisions is almost a greater stress than the competition between the division and their local competitors. Where bonuses have been tied to survey results, the results actually affect livelihoods."
"Instead of making changes for the better, builders may become focused on winning the award. If you know how the mechanisms work, you find that people are adjusting their internal processes," says Casey. "There's a reduction in the cycle of improvement of customer satisfaction."
There's no question that J.D. Power has a lot at stake in making sure its survey remains above reproach. Sonkin admits that she has heard about these scenarios. "We can't take action on hearsay, but if we find out that something is illegal, or threatening, or very serious, we will pull that company out of the study," says Sonkin. "We've done it in other industries -- we have no hesitation to do that."
"Finding a way to work with builders to avoid [inappropriate] tactics would definitely be in the best interest of both the consumer and the building industry," says Baumgarten. "The more that J.D. Power and Associates is aware of this and works collaboratively with builders, the more valid and valuable the information provided to the public and to the subscriber builders will be."
"Our experience in working with J.D. Power and Associates has been very positive," says Baumgarten. "They have listened to our concerns and sought our input. Whether or not the input they get from Shea Homes and other builders is able to influence improvements remains to be seen."
One thing does seem certain: With plans to expand into even more markets next year, J.D. Power's influence in setting the home building industry's benchmark for customer satisfaction is expected to grow even more powerful.
Define and Defend
When it comes to marketing benefits, Pulte's vice president of customer satisfaction, Alan Laing, says sweeping the top spot in 12 J.D. Power regions goes a long way in supporting Pulte's broader branding goals. Pulte leverages their wins to:
Define themselves in a competitive environment. "We believe with our heart that the one thing that is non-negotiable are delighted customers," says Laing. While winning isn't a driver on Wall Street, Laing says: "A company with a fabulous brand is worth more than a company without one. Over time, that, along with other variables, will create a company that is different from others."
Defend themselves against the insurance industry and increasing litigation. "To defend ourselves, we want to have the ability to reliably build high-quality homes on time that don't break down and fail," says Laing. "The risks affiliated with that, in the future, are going to be considerable." As a result of the awards, Laing speculates the company will see a discount in insurance premiums in the next three to five years. "Even if it means that we would self-insure and potentially take some of that risk because we don't feel like we're getting value from the insurance premiums," says Laing.
Discipline themselves. "People are frustrated because they are challenging their organizations and saying 'we've gotta win [the J.D. Power award]', but they haven't done the hard work of preparing the organization to win," says Laing. "It's like putting a minor-league sports team in a major-league game. You're gonna get killed. We consider ourselves a professional organization that understands what it takes to win and have been very disciplined in preparing our organization to do so."