GMS Spotlight: Staying ahead of the curve
By Eric Richard
Working in the translation and localization industry is like working in a pressure cooker. Customers want more content translated into more languages, with higher quality, on faster schedules. And while the volume of content is scaling up, the cost of translating that content cannot scale up at the same rate.
What makes this problem even more challenging is that this isn't a short-term issue; the amount of content to be translated will increase again next year, and the year after that, and the year after that, for the foreseeable future.
Because of this, translation providers are constantly under pressure to find ways of eking the next round of efficiency out of their processes and cost out of their suppliers to meet the never-ending demands for more, more, more.
The first year a customer asks for a rate cut, it might be possible to squeeze your suppliers to get a better rate from them. But, you can only go back to that well so often before there is nothing left to squeeze.
The next year, you might be able to squeeze some efficiency out of your internal operations. Maybe you can cut a corner here or there to stay ahead of the curve. But, again, there are only so many corners to cut before you are really hurting your ability to deliver quality results.
So, what happens when you run out of corners to cut and low-hanging fruit to pick? How do you deal with the never-ending demands to do more for less? How can you get a non-linear improvement in your efficiencies to help get ahead of the curve?
THE ANSWER IS TECHNOLOGY.
In the 1980s, the technology solution of choice was translation memory (TM). By deploying TM solutions, translators could reuse their previous work and suddenly process a higher volume of work than before.
Over the past several years, translation memory has spread throughout the entire localization supply chain. Translators and LSPs now use client-side TM in their translation workbenches to improve their efficiency. And more and more enterprises are realizing that if they own their own TM, they can cut their costs and increase the quality and consistency of their translations.
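The leverage step at the heart of TM can be sketched in a few lines. The following is a minimal illustration, not any product's actual behavior: it uses the Python standard library's `difflib.SequenceMatcher` as a stand-in for a real similarity metric, and the memory contents and the 0.75 fuzzy threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

# A toy translation memory: previously translated source/target pairs.
translation_memory = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "Click the Cancel button.": "Cliquez sur le bouton Annuler.",
}

def leverage(segment, tm, fuzzy_threshold=0.75):
    """Return (match_type, suggested_translation) for a source segment."""
    if segment in tm:
        return "exact", tm[segment]
    best_score, best_target = 0.0, None
    for source, target in tm.items():
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_target = score, target
    if best_score >= fuzzy_threshold:
        return "fuzzy", best_target
    return "no match", None

print(leverage("Click the Save button.", translation_memory))
print(leverage("Click the Help button.", translation_memory))
```

A new segment that closely resembles a past one ("Click the Help button.") comes back as a fuzzy match the translator can edit rather than retranslate; that reuse is where the efficiency gain comes from.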
The great news in all of this is that efficiency across the board has increased.
The tough part is that most of the low-hanging fruit in efficiency gains has already been picked by early adopters, because TM-based solutions are becoming ubiquitous throughout the translation and localization supply chain. That said, there are still many companies out there ready to drive even more efficiency from the supply chain and, in some cases, start looking for ways to increase top-line revenue opportunities.
Once early leaders recognized the value of TM, the search was on for the next big technology solution that could help them stay ahead of the curve. And the solution came in the form of applying workflow to the localization process; by automating previously manual steps, companies could achieve major increases in productivity and quality. Steps previously performed by a human could be performed by machines, reducing the likelihood of errors and freeing up those people to work on the hard problems that computers can’t solve.
Companies who have deployed workflow solutions into their localization processes regularly see immediate improvements. This rarely means reducing staff. Instead, it often means pushing through more content into more languages faster than before with the same staff.
For many organizations that have not yet deployed workflow solutions, this is a great opportunity to improve their efficiencies. Like TM, however, workflow has already crossed the chasm and is moving into the mainstream. Large localization organizations have already deployed workflow solutions and many have even gone through second round refinements to their systems to get most of the big wins already.
For those customers who have already deployed a workflow solution, the real question is "What’s next?" What is the next generation solution that is going to help them deal with the increases in content and keep their advantage in the market?
It is my belief that the next big wave is going to come from combining the previous two solutions, translation memory and workflow, with another emerging technology: machine translation (MT).
Creating an integrated solution that provides the benefits of both translation memory and machine translation in the context of a workflow solution will provide companies with the ability to make headway into the content stack and start translating more and more content that was previously not even considered for translation.
There are many models in which these technologies can be mixed together.
The simplest, and least disruptive, model is to flow machine translation results into the exact same process that is used today, an approach that has been dubbed "machine assisted human translation." The process starts just as it would today, with the content being leveraged against a translation memory to produce a variety of match types (exact, fuzzy, etc.). But before providing these results to the translator, the new process takes the most expensive segments, those without a suitable fuzzy match from the TM, and runs them through machine translation. The end result is that no segment ever needs to be translated from scratch; the translator always has content to start from.
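The machine assisted human translation flow described above can be sketched as a small pre-translation pipeline. This is a sketch under stated assumptions: `machine_translate` is a hypothetical placeholder for a real MT engine call, `difflib.SequenceMatcher` stands in for a real similarity metric, and the 0.75 threshold is illustrative.

```python
from difflib import SequenceMatcher

def tm_match(segment, tm, threshold=0.75):
    """Return the best TM suggestion for a segment, or None below threshold."""
    best_score, best_target = 0.0, None
    for source, target in tm.items():
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_target = score, target
    return best_target if best_score >= threshold else None

def machine_translate(segment):
    # Hypothetical stand-in for a call to a real MT engine.
    return "[MT] " + segment

def pretranslate(segments, tm):
    """Give every segment a starting translation: TM first, MT as fallback."""
    results = []
    for seg in segments:
        suggestion = tm_match(seg, tm)
        if suggestion is not None:
            results.append((seg, "TM", suggestion))
        else:
            # The expensive segments (no suitable fuzzy match) go through MT,
            # so the translator never starts from an empty target.
            results.append((seg, "MT", machine_translate(seg)))
    return results
```

Each segment leaves the pipeline tagged with its origin, so the translator (and downstream pricing) can distinguish TM leverage from raw MT output.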
Obviously the devil is in the details here, and the real success of this model will be tied directly to the quality of the results from machine translation. If the machine translation engine results can provide a good starting point for translation, this approach has the ability to increase the productivity of translators.
On the flip side, the most radical model would be to combine machine translation and translation memory together but without any human translator or reviewer involved. The key to this approach is to take a serious look at an issue that is traditionally treated as sacrosanct: translation quality.
In traditional translation processes, quality is non-negotiable. It is simply a non-starter to talk about translating your website, product documentation, software UI, or marketing collateral in anything other than a high quality process.
However, does this same requirement hold true of all of the content that you want to translate? Are there specific types of content for which the quality level is slightly less critical?
Specifically, are there types of content you would not normally translate, but for which the value of having a usable translation is more valuable than having no translation? For example, there may be types of content for which time-to-market of a reasonable translation is more important than taking the time to produce a high quality translation.
For content that fits into these categories, you might consider an approach like the one described above to produce what Jaap van der Meer of TAUS calls "fully automatic useful translation (FAUT)."
It is absolutely critical to understand that this is not proposing that we replace humans with machines for translation. Instead, this is looking at how we can use technology to solve a problem that is too expensive to have humans even try to solve today; this is digging into the enormous mass of content that isn’t even considered for translation today because it would be cost prohibitive to do using traditional means.
The best part of combining machine translation and translation memory with workflow is that the workflow can be used to determine which content should use which processes. The traditional content for which high quality is imperative can go down one path while content that has other requirements can go down another path.
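The routing decision described above can be as simple as a lookup keyed on the type of content. The sketch below is illustrative only: the content categories and the two route names are assumptions, and a real workflow system would key on much richer metadata.

```python
# Illustrative route table: high-visibility content takes the traditional
# human-quality path, while high-volume, lower-stakes content takes the
# fully automatic (FAUT) path. The categories here are assumptions.
ROUTES = {
    "marketing": "tm_mt_human_review",
    "product_docs": "tm_mt_human_review",
    "software_ui": "tm_mt_human_review",
    "support_kb": "fully_automatic",
    "forum_posts": "fully_automatic",
}

def route(content_type):
    """Pick a translation path; default to the high-quality human path."""
    return ROUTES.get(content_type, "tm_mt_human_review")
```

Defaulting unknown content to the human-review path keeps the traditional quality guarantee intact while still opening the automatic path for content that was previously never translated at all.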
You might think that this is science fiction or years from reality, but the visionary companies in the localization industry are already deploying solutions just like this to help them deal with their translation problems today. They see this approach as a fundamental part of how they will address the issue of the volume of content that needs to be translated.
This solution is in the midst of crossing the chasm from the early adopters to the mainstream market. While translation memory and workflow are by now mainstream, some of the early adopters of content globalization and localization technologies are already looking for the next advantage, a way to keep up with steadily increasing demands. Clearly, these companies should strongly consider adding machine translation to the mix.
ABOUT IDIOM® TECHNOLOGIES, INC.
Idiom® Technologies is the leading independent supplier of SaaS and on-premise software solutions that enable our customers and partners to accelerate the translation and localization process so content rapidly reaches markets worldwide. Unlike other companies serving this market, Idiom offers freedom of choice by embracing relevant industry standards, supporting popular content lifecycle solutions and partnering with the industry’s leading language service providers.
As a result, WorldServer™ GMS solutions are fast becoming an industry standard, allowing customers to expand their international market reach while reducing costs and improving quality. WorldServer is used every day by organizations possessing many of the most recognizable global brands to more efficiently create and manage multilingual websites (e.g., AOL, eBay and Continental), localize software applications (e.g., Adobe, Beckman Coulter and Motorola) and streamline translation and localization of corporate and product documentation (e.g., Autodesk, Cisco and Business Objects).
Idiom is headquartered in Waltham, Massachusetts, with offices throughout North America and in Europe. WorldServer solutions are also available through the company’s Global Partner Network™. For more information, please visit www.idiominc.com.
ABOUT ERIC RICHARD - VP, ENGINEERING, IDIOM TECHNOLOGIES
Eric Richard joined Idiom from Chicago-based SPSS, where he served as Chief Architect. Previously, he wore several hats as co-founder, Vice President of Engineering, and Chief Technology Officer at NetGenesis (acquired by SPSS), where he directed the company's technology development.
In 2001, Eric was a finalist in the Ernst & Young New England Entrepreneur of the Year Awards. He is a graduate of the Massachusetts Institute of Technology.
ClientSide News Magazine - www.clientsidenews.com