From time to time, clients ask why their internal customer experience (CX) tracking programs appear to yield different results from independent third-party syndicated studies. In the automotive customer experience industry, the independent studies clients ask about most often are the JD Power Sales Satisfaction Index (SSI) and Customer Service Index (CSI) studies and the MaritzCX New Vehicle Customer Study. This paper briefly describes why the scores may differ even though both may be correct.
Methodology Differences Between CX Studies and Syndicated Studies
Almost always, differences in scores between these two types of studies are due to differences in their purposes and methodologies. Customer experience studies are generally designed to measure individual dealership performance, diagnose areas of strength and weakness within dealerships, and identify individual customers who need attention. Therefore, most customer experience programs survey customers as close to the event as possible, communicate with customers on behalf of the automotive brand (i.e., surveys are company branded), and survey as many customers as possible. Company customer experience surveys also usually rely on continuous, direct data feeds of sales and service events coming from either the company or its dealerships. The advantage of this is that transaction records can be obtained quickly, so surveying is usually done within days of the event. A disadvantage, in many cases, is that only certain types of records (usually warranty service records) can be obtained reliably, so some programs do not survey other service customers at all.
Syndicated studies, on the other hand, are usually designed to measure brand-level customer experience across all dealerships and to compare it with the performance of other brands. Syndicated studies also typically do not receive continuous, direct data feeds of transactions from automotive manufacturers or dealers. Therefore, syndicated studies usually sample automotive sales and service customers at the national level, conduct surveys weeks or months after the sales or service event, and brand the survey to the company conducting the study rather than to the automotive manufacturer.
Data Differences Seen Between Internal CX and Syndicated Studies
All of these methodological differences can lead to the results of internal CX studies and syndicated studies appearing not to “match”. Below are examples of how data may not “match” and explanations for the apparent discrepancy between scores:
- Internal studies almost always show higher scores than syndicated studies. This is most likely due to the shorter time frame between the event and the survey in CX studies and to the fact that CX studies are branded to the automotive manufacturer. Also, most customers are aware that the results of CX studies will affect the dealership, so they may be less likely to give poor scores.
- Sometimes scores seem different between internal CX studies and syndicated studies because the studies use different scales. For instance, the JD Power SSI and CSI studies use 1,000-point scales, whereas many internal CX studies use 100-point scales. Therefore, a time-trend difference of 91.5 to 92.7 (a difference of 1.2) may be perceived as small and insignificant on an internal CX measure, while the same difference of 915 to 927 (a difference of 12) may be perceived as large on the syndicated measure (the first sketch after this list works through the arithmetic).
- Sometimes scores from internal CX studies and syndicated studies diverge because of the time frames in which their data are collected. Virtually all internal CX studies in the automotive industry collect data continuously, whereas many syndicated studies collect data during a specified window of only a few months. If a brand crisis or other event occurs immediately before or during the syndicated study's data collection window, the syndicated study's findings will be influenced more than the internal CX findings; if the event occurs well outside that window, the syndicated study's findings will be influenced less.
- Usually, trends over time within internal CX programs and trends over time within syndicated studies tell the same story. At times, however, one study may show improvement (or deterioration) while the other does not. When this happens, an investigation is needed, but the reason for the difference can usually be found. Examples I have seen include:
- Clients noting that their internal CX measure showed improvement while their ranking in a syndicated study went down. Investigation showed that the client company's absolute score rose in both the internal and syndicated studies, but its ranking in the syndicated study fell because other companies had improved more. This often occurs when only a few points separate brand-level scores in the syndicated studies.
- A client noting that their absolute score had remained about the same in their internal CX study but had decreased in the syndicated study. Investigation showed that earlier in the year the company had removed from its internal surveys customers affected by an issue beyond its dealers' control. The internal CX score was therefore not adversely affected by the issue, but the syndicated score was.
- Sometimes internal CX programs and syndicated programs can appear not to "move together" because different item weightings are used to calculate the final index score. If the studies weight attributes differently (e.g., Salesperson Knowledge or Service Advisor Knowledge), changes in those attributes will affect the two studies' scores differently (the second sketch after this list illustrates this).
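To make the scale point concrete, here is a minimal sketch of the arithmetic. The scores are illustrative only, not taken from any actual study; the only assumption is a 100-point internal scale versus a 1,000-point syndicated scale.

```python
# Illustrative scores only -- not data from any actual study.
internal_before, internal_after = 91.5, 92.7   # 100-point internal CX scale

# The same underlying change expressed on a 1,000-point scale,
# as used by the JD Power SSI and CSI indices:
syndicated_before = internal_before * 10       # 915
syndicated_after = internal_after * 10         # 927

print(f"Internal change:   {internal_after - internal_before:.1f} points on a 100-point scale")
print(f"Syndicated change: {syndicated_after - syndicated_before:.0f} points on a 1,000-point scale")

# Normalizing each change to a percentage of its scale shows they are identical:
print(f"Internal:   {(internal_after - internal_before) / 100:.2%} of scale")
print(f"Syndicated: {(syndicated_after - syndicated_before) / 1000:.2%} of scale")
```

Both changes are 1.20% of their respective scales; only the raw point difference makes one look bigger than the other.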
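Similarly, here is a minimal sketch of how different item weightings can make two indices move differently. The attribute names, scores, and weights are hypothetical, chosen only to show the mechanics of a weighted index, and do not reflect the actual weighting scheme of any internal CX or syndicated study.

```python
# Hypothetical attribute scores and index weights -- purely illustrative.
scores = {
    "salesperson_knowledge": 9.0,
    "facility": 8.5,
    "delivery_process": 8.8,
    "paperwork": 9.2,
}

# Two studies weighting the same attributes differently (weights sum to 1):
internal_weights = {"salesperson_knowledge": 0.40, "facility": 0.20,
                    "delivery_process": 0.20, "paperwork": 0.20}
syndicated_weights = {"salesperson_knowledge": 0.15, "facility": 0.35,
                      "delivery_process": 0.35, "paperwork": 0.15}

def index_score(scores, weights):
    """Weighted average of attribute scores."""
    return sum(scores[attr] * w for attr, w in weights.items())

base_internal = index_score(scores, internal_weights)
base_syndicated = index_score(scores, syndicated_weights)

# A one-point gain in Salesperson Knowledge moves the two indices differently:
scores["salesperson_knowledge"] += 1.0
print(f"Internal index change:   {index_score(scores, internal_weights) - base_internal:+.2f}")
print(f"Syndicated index change: {index_score(scores, syndicated_weights) - base_syndicated:+.2f}")
# -> +0.40 on the internal index vs +0.15 on the syndicated index
```

The same underlying improvement produces a noticeably larger move in whichever index weights that attribute more heavily, so the two studies can appear to diverge even when customers' experiences changed in exactly the same way.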
Summary
These are just some examples of how internal CX studies and syndicated studies differ and how those differences may show up in their findings. Given the many differences between these types of studies, we should expect them to track together in general, but we should also recognize that events will affect their results differently and at times cause them to diverge.