Mission Impossible: Improve Quality, Time and Speed At the Same Time

Using SAE J2450 to Do the Impossible

It is the accepted wisdom of the translation world that translation quality, speed and cost are locked in a zero-sum game: any improvement in one comes at the expense of one or both of the others. If you need to improve quality, translation takes longer and costs more because of extra quality assurance steps. If you need quick turnaround, you pay a premium, and if you want cheap, you might as well throw quality and speed out the window. This so-called "quality-speed-cost triangle" is a staple of books on translation and is taught to students of translation around the world. Based on the triangle, one would expect that implementing a quality assurance (QA) process would improve quality but raise costs and turnaround time, since it represents an additional process that translation suppliers must carry out.

General Motors (GM), however, has found that stressing quality and implementing QA steps as an integral part of the translation process leads not only to significant improvement in quality, but also to dramatic improvements in cost and turnaround time. In this article GM's Don Sirena, Language Translation Manager in GM's North American Services and Parts Operations division, reports on how GM used the SAE J2450 quality metric to help reduce translation errors by 90%, reduce turnaround time from weeks to days, and lower costs substantially. Don first reported on this initiative in his presentation (available to LISA General Assembly members) at the LISA Forum USA 2001 in Chicago. This article is an update on that project, showing even more dramatic results than Don predicted three years ago.
What this traditional understanding misses, however, is that almost any process will have inefficiencies that can be corrected to gain improvements at essentially no cost. The problem is in identifying these inefficiencies: while some may be obvious, such as manual processing of files that could easily be automated, others are not, and some may be so deeply rooted in a process that they cannot be seen at all without careful examination of the entire translation process. GM has found that QA metrics, like the Society of Automotive Engineers (SAE) J2450, not only help assure quality, but can also isolate problem points in the translation process and aid in vendor selection, even before translation has begun. Properly applied throughout the translation process, quality metrics can help identify points of error or inefficiency, and can lead to simultaneous, substantial improvements in quality, speed and price.

SAE J2450: A Brief History

Measuring translation quality has historically been highly subjective and non-standardized: there was no way of gauging quality except a gut feel for whether the translation was good, and such an approach tends to focus more on issues of style than on the accurate conveyance of information. Needless to say, evaluations of quality varied widely among individuals and often had as much or more to do with their like or dislike of the source document as with the actual translation of the document.
Because of this difficulty, SAE established its J2450 task force in 1997 under the direction of Kurt Godden of GM, with the goal of establishing a standard quality metric for the automotive industry that could be used to provide an objective measure of linguistic quality for automotive service information, regardless of language or process. The metric became an SAE Recommended Practice in October 2001 and is now progressing toward the level of SAE Standard. In 2001 a European task force was formed to expand usage of J2450 in Europe and assist in the development of training materials and statistical testing.

J2450's approach to quality assurance is quite straightforward. It bases quality scores on seven types of errors:

- wrong term
- syntactic error
- omission
- word structure or agreement error
- misspelling
- punctuation error
- miscellaneous error
Errors in each category can be classified as either major or minor, with a numeric score attached to each error and severity level. The composite score is the weighted sum of the errors, normalized by the number of words in the text. This simple statistical approach makes comparison of the quality figures of different texts straightforward, while examination of the errors in specific categories can assist in the identification of particular problem areas.

J2450 is not a stand-alone QA process. Its scope is limited to linguistic/translation errors, not to other problems, such as formatting or presentation errors, that might cause a project to be unacceptable to end-users. Thus J2450 must be part of an overall quality process and is not a substitute for additional quality processes. However, when properly applied, J2450 provides a way to evaluate the quality of one of the most important components of any multilingual project.

None of the error categories focuses on stylistics; all address problems that can affect the ability of users to understand the information contained in a document. This focus on the information content of text reduces the endless wrangling over translation quality that plagues more subjective measures of quality. (Both SAE J2450 and LISA's QA Model 3.0 share the same focus on quantifiable measures of quality, although LISA's model is more focused on the entire localization process, rather than being primarily a translation quality metric.)
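As a concrete illustration, the composite score amounts to a weighted sum of classified errors divided by the word count of the evaluated sample. The sketch below uses category names and severity weights chosen for illustration only; they are placeholders, not the official J2450 severity weights.

```python
# Sketch of a J2450-style composite score: weighted error counts
# normalized by the word count of the evaluated text.
# The weights below are illustrative placeholders, not the official
# J2450 severity weights.

SEVERITY_WEIGHTS = {
    ("wrong_term", "serious"): 5, ("wrong_term", "minor"): 2,
    ("syntactic", "serious"): 4, ("syntactic", "minor"): 2,
    ("omission", "serious"): 4, ("omission", "minor"): 2,
    ("word_structure", "serious"): 4, ("word_structure", "minor"): 2,
    ("misspelling", "serious"): 3, ("misspelling", "minor"): 1,
    ("punctuation", "serious"): 2, ("punctuation", "minor"): 1,
    ("miscellaneous", "serious"): 3, ("miscellaneous", "minor"): 1,
}

def j2450_score(errors, word_count):
    """errors: list of (category, severity) tuples found in the sample.
    Returns weighted error points per word (lower is better)."""
    if word_count <= 0:
        raise ValueError("word_count must be positive")
    total = sum(SEVERITY_WEIGHTS[e] for e in errors)
    return total / word_count
```

Because the score is normalized per word, two samples of different lengths can be compared directly, and per-category tallies can be kept alongside the composite to locate recurring problem areas.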
Because of the focus on measurable error rates, J2450 can also serve as a basis for client-supplier discussions about problems and as neutral ground for evaluation of performance: the problems reported either exist or they do not, and the scores reflect real problems rather than perceived, subjective ones. In addition, the J2450 metric can be applied to source documents as well as translated ones, helping identify authoring problems that have downstream effects.

GM's Experience with J2450

Beginning in June 2000, GM Service Operations North America adopted J2450 to aid in assessment of translations of service manuals, and in 2001 one of GM's translation suppliers began assessment of GM service bulletins using J2450. So far, in assessment of over 1,000,000 words (randomly chosen from over 20,000,000 words translated into seven languages), J2450 has helped bring about significant improvement in GM's translations. Since adoption of J2450, GM has seen a 90% reduction in translation errors, a 75% improvement in translation turnaround time, and an 80% reduction in overall translation costs. GM's results have been consistent across the languages, individuals and processes with which it works, and J2450 is now an essential element in GM's translation process, helping to generate predictable and successful results.

How did GM achieve these results? While the connection between J2450 and the dramatic improvements in quality is obvious, the link to improved speed and cost is less apparent. The key is the systematic application of J2450 to identify problems and inefficiencies in the translation process and to correct them before they create other problems.
Such evaluation might, for example, reveal that many errors are being introduced because of problems in translation memory (TM) usage: matched segments might not be found, or matches might return out-of-date material that should have been purged from the TM database during maintenance. Corrective steps can then be taken before errors compound during other processes.

One of the most dramatic results of using J2450 was that it allowed GM to essentially eliminate time-consuming post-translation review processes. As shown in Figure 1, initial error rates for raw (unedited) projects were typically much higher than the customer satisfaction threshold, and projects required substantial editing to meet quality targets. Using J2450 throughout the process, however, led to a decrease in raw error rates, to the point that they began to converge with the rates for edited projects and fell below the customer satisfaction threshold. At that point, there is no reason to include a final editing step, and it can safely be left out of the process, substantially decreasing turnaround time and costs since a labor-intensive manual step is no longer needed. Such results, however, are not achieved overnight; they require consistent dedication and effort. It took GM three years to reach this point, but it now receives consistent benefit from its focus on quality.
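The decision to retire the editing step can be thought of as a simple rule over recent raw scores: drop the step only once every unedited project in a recent window has scored below the customer satisfaction threshold. The sketch below assumes a hypothetical threshold and window size; both values are illustrative, not GM's figures.

```python
# Sketch: decide whether the post-translation editing step can be
# retired, based on whether recent raw (unedited) J2450-style scores
# stay below the customer satisfaction threshold. The threshold and
# window size are illustrative assumptions, not GM's actual figures.

SATISFACTION_THRESHOLD = 0.25  # hypothetical maximum acceptable score
WINDOW = 6                     # hypothetical number of recent projects

def editing_step_needed(raw_scores):
    """raw_scores: J2450-style scores of recent unedited projects,
    oldest first. Editing can be dropped only when every score in the
    most recent window is below the threshold."""
    recent = raw_scores[-WINDOW:]
    if len(recent) < WINDOW:
        return True  # not enough evidence yet; keep the edit step
    return any(s >= SATISFACTION_THRESHOLD for s in recent)
```

Requiring a sustained run of passing scores, rather than a single good project, reflects the point made above: the editing step is removed only after raw quality has demonstrably converged with edited quality.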
Figure 1. Convergence of unedited and edited error scores can lead to elimination of the final editing step.

Simply defining a standard and requiring its use is not enough to achieve these results, however, since standards must be understood, interpreted and applied correctly. If a supplier implements a quality standard incorrectly, the results will obviously not be optimal and may conceal major problems, even as they reassure the client that the results are of high quality. Because of the very real potential for misapplication of any quality metric, GM found it very useful to test suppliers' ability to use the J2450 standard before work began.

To validate potential suppliers' use of J2450, the GM language management team prepared a test consisting of ten sample files (between 325 and 350 words each) in Canadian French, plus GM's terminology glossary. This information was sent to seven GILT suppliers, and each supplier was asked to assess the sample files against the glossary file according to J2450, calculate its own scores, and return the results, along with assessment "mark-ups", to GM purchasing. Because GM had produced the source files, the language management team knew what scores to expect. Three of the suppliers (plus the two existing suppliers) achieved benchmark scores that indicated correct application of J2450, while two did not. An examination of the two companies that failed to properly implement J2450 revealed critical issues in two areas:
GM's trials identified issues with how potential suppliers used J2450. In addition, the results of the trials were found to be indicative of overall quality, production time and cost to GM. For the first time GM was able to use a proven and objective measure for evaluating translation quality, timing and cost. GM's experience has convinced it of the appropriateness of SAE J2450 as a valid tool for measuring translation quality. The tests used to evaluate vendors were both fair and accurate, and the use of such tests in the bid process helps determine capability early on. Such tests do not unfairly disadvantage any individual supplier, especially since the purpose of the test and the interpretation standards are made known before the actual tests are carried out.

Conclusion

QA has traditionally been seen as an add-on step at the end of the translation process, but this view ignores the real potential for QA to improve the entire translation process. When QA is seen as central to translation and localization efforts, SAE J2450 (and other metrics, like LISA's QA Model 3.0) can deliver benefits that far exceed improvements in quality. Quality metrics can serve to improve every step of the translation process, from supplier selection to final delivery. SAE J2450 is based heavily on terminological considerations and would not be suitable as a quality metric for all vertical industries, nor does it address non-translation quality aspects of the localization process. However, any "terminology-rich" industry, such as medical systems, industrial equipment or manufacturing, should be able to benefit from J2450 in ways similar to what GM has experienced, and other industries could benefit from other quality metrics, such as LISA's QA Model, that may be better suited to their particular needs.
The point is that an emphasis on quality does not have to represent an additional step (and cost), but can be the gateway to simultaneous improvements in all aspects of the localization process.

Don Sirena is the business manager responsible for language translation within GM Service and Parts Operations North America. Don has been with GM since 1986 and has primarily been involved in business operations and vendor management. He has been an active member of the J2450 task force for four years. His current assignment includes the continuous improvement of language translation relative to customer satisfaction and the regional consolidation of all GM North American language translation business activities. Don is also the North American representative to the GM Global Translation Team, which includes GM Europe, GM Latin America and GM Asia Pacific.

Reprinted by permission from the Globalization Insider, 11 May 2004, Volume XIII, Issue 2.2. Copyright the Localization Industry Standards Association (Globalization Insider: www.localization.org, LISA: www.lisa.org) and S.M.P. Marketing Sarl (SMP) 2004.