Scaling translations using AI & automation


In this XTRF Summit #online 2021 session, David Meikle, CTO of Lingo24, shared how they’ve used AI and automation to scale operations and do more, faster and easier. This article is based on that session. Discover what motivated them to take this approach, what they did, and the lessons they learned along the way.

 


Lingo24 is a global language services provider, headquartered in Edinburgh, with offices in Europe, the Americas, and Asia, and operating a 24/7 delivery model. We work with some of the world’s biggest brands to help them achieve their global impact. Our aim is to combine human creativity with the power of AI to help our customers get their global content right.

The problem – why we needed to scale

Everyone in the translation industry is familiar with the challenge – in this digital age, content is being created much faster than anyone can translate it. In fact, according to SINTEF, 90% of the world’s data was generated in the last two years alone.

Allied to the growth in content is the impact of globalization: growing global markets are larger (or set to become larger) than traditional ones. However, whilst content and the demand for it in other languages and regions keep growing, budgets are not growing at the same pace.

And at Lingo24, with the market growing incredibly fast around us, we found ourselves wondering, how do we scale and keep up in a way that’s sustainable?

In 2020, this was further compounded by the pandemic: traditionally offline businesses moved online, digital businesses thrived, and improvement projects that had been on hold were accelerated during the downtime. All of this increased the demand on our industry.

The solution – improving time to market with AI & automation

To meet the challenge, we decided to set ourselves a north-star metric: the “time to market” of the content we worked on for customers.

Our thesis was that by working on the areas that contribute to that metric, we would find ways to scale sustainably.

The key question we asked ourselves was: how do we deliver content to our customers faster, without impacting quality?

To do this, we broke the problem down into three key areas:

  1. Content processing and quoting.
  2. Finding the right match / setting up for success.
  3. Translation and production processes.

Content processing and quoting

Our main aim in this area was to focus on how content came into and out of Lingo24, and how it was quoted.

Early on we made the decision to put the power in our customers’ hands, giving them an easy-to-use portal (and API) that could be set up to their preferences using tailored defaults that can easily be overridden.

Using this approach, we automated the ordering channel for our business, creating rich customer profiles with customer-specific pricing, workflows, filters, and pipelines. These profiles drive the automation in our internal systems and the selection of assets, allowing us to achieve real-time pricing and quoting.
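To illustrate the idea, here is a minimal sketch (hypothetical Python, not our production code) of how a customer profile with tailored defaults and per-order overrides might drive a real-time quote:

```python
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    """Hypothetical customer profile: tailored defaults that individual orders can override."""
    name: str
    rate_per_word: float                     # customer-specific pricing
    workflow: str = "translate+review"       # default workflow
    file_filters: tuple = ("xliff", "docx")  # default content filters

def quote(profile: CustomerProfile, word_count: int, overrides=None) -> dict:
    """Build a real-time quote from the profile's defaults, applying any per-order overrides."""
    settings = {"workflow": profile.workflow, "rate_per_word": profile.rate_per_word}
    settings.update(overrides or {})
    return {
        "customer": profile.name,
        "workflow": settings["workflow"],
        "price": round(word_count * settings["rate_per_word"], 2),
    }

acme = CustomerProfile(name="Acme Corp", rate_per_word=0.12)
print(quote(acme, word_count=5300))                                     # tailored defaults
print(quote(acme, word_count=5300, overrides={"rate_per_word": 0.10}))  # per-order override
```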

Even where content needs add-on services or human intervention, it is supported through automation: tasks are allocated to the relevant internal team for action, and a notification is sent back to the customer when the work is ready.

These changes have had a massive impact, with:

  • Over 95% of orders coming through an automated channel.
  • Over 79% of customized filtering handled automatically by our system.
  • The majority of quotes (where automated approval isn’t in place) produced without the need for our team’s intervention.

Finding the right match / setting up for success

Our aim here was to rework our internal processes to allow our project managers to manage projects, rather than administer them, and to support translators by giving them what they need in order to deliver.

This stage was the hardest of the three. It involved:

  • Building rich translator profiles – which contained key information on them, their expertise, and performance, to aid selection.
  • Enhancing subject-matter selection – as generic models can miss company- or domain-specific subtlety and can’t be relied on out of the box, and
  • Providing customer information and reference material to the translator – making sure the right information is always available based on the specifics of the project.

But arguably the most important and most challenging aspect was building the technology to identify the right translator to work on a job.

Partly, this was down to the level of customization we’d built into customer workflows and profiles, adding a significant number of variables to consider in making such a decision.

And partly it was because we didn’t truly understand all the factors that influence a PM’s decision to choose a particular translator (more on this later), meaning our initial simplistic views were not fit for purpose.
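To make the matching problem concrete, here is a deliberately simplified sketch (hypothetical names and weights, nowhere near the full set of variables we ended up with) of how a translator-ranking score might be structured:

```python
from dataclasses import dataclass

@dataclass
class TranslatorProfile:
    """Toy translator profile: expertise and performance data used for matching."""
    name: str
    language_pair: str         # e.g. "en-de"
    domains: set               # subject-matter expertise
    quality_score: float       # historical review scores, 0..1
    on_time_rate: float        # delivery reliability, 0..1
    prior_customer_work: bool  # has worked for this customer before

def match_score(t: TranslatorProfile, language_pair: str, domain: str) -> float:
    """Toy weighted score; the real decision involves far more variables."""
    if t.language_pair != language_pair:
        return 0.0
    return (0.4 * t.quality_score
            + 0.2 * t.on_time_rate
            + 0.2 * (domain in t.domains)
            + 0.2 * t.prior_customer_work)

def rank_translators(pool, language_pair, domain):
    """Order the pool from best to worst match for the job."""
    return sorted(pool, key=lambda t: match_score(t, language_pair, domain), reverse=True)

pool = [
    TranslatorProfile("Anna", "en-de", {"electronics"}, 0.92, 0.98, True),
    TranslatorProfile("Ben",  "en-de", {"legal"},       0.95, 0.90, False),
]
print([t.name for t in rank_translators(pool, "en-de", "electronics")])
```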

Translation and production processes

Finally, like others in the industry, we aimed to streamline the translation process to support the increasing demand and grow together with our customers.

In practice, this involved taking the bold decision to build our own online CAT tool – a decision I sometimes questioned in the early days, given the many off-the-shelf tools available! But in the end, controlling our own platform has been central to the depth of our process integration and has provided a platform for innovation.

In this platform, we focused on flexible workflows, building on the customer profiles to address quality at the source, as every customer is unique. We also used AI to enhance our quality checks, for example checking for drift in meaning between source and target segments.
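To illustrate the drift check, here is a minimal sketch using the open-source sentence-transformers library to compare multilingual sentence embeddings. It is a conceptual example only, not our platform’s implementation:

```python
# Flag a possible "drift in meaning" between a source segment and its translation
# by comparing multilingual sentence embeddings (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def drift_warning(source: str, target: str, threshold: float = 0.7) -> bool:
    """Return True if the translation looks semantically distant from the source."""
    embeddings = model.encode([source, target], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity < threshold

print(drift_warning("The device must be switched off before cleaning.",
                    "Das Gerät muss vor der Reinigung ausgeschaltet werden."))
```

The threshold here is arbitrary; in practice it would need to be tuned per language pair and content type.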

We were an early adopter of Machine Translation, using statistical methods and now neural methods. We made an investment in developing our own Adaptive Neural Machine Translation technology and engines – these are engines that learn from the real-time edits of translators and reviewers.

Using this technology we’ve focused on building and deploying customized, adaptive MT engines tailored to our customers, using the data instrumentation in our platform to make sure it actually has the desired impact for everyone.
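Conceptually, an adaptive engine is a feedback loop: the engine proposes a draft, the translator’s or reviewer’s confirmed edit flows back, and the next suggestion improves. The toy sketch below only memorizes confirmed segments, whereas a real adaptive NMT engine updates the model itself, but it shows the shape of the loop:

```python
class ToyAdaptiveTranslator:
    """Conceptual illustration only: remembers confirmed post-edits and reuses them.
    A real adaptive NMT engine updates model weights (or an adaptation layer) instead."""

    def __init__(self, baseline_translate):
        self.baseline_translate = baseline_translate  # any function: source text -> draft
        self.memory = {}                              # source segment -> confirmed edit

    def translate(self, source: str) -> str:
        return self.memory.get(source, self.baseline_translate(source))

    def learn(self, source: str, post_edit: str) -> None:
        """Called whenever a translator or reviewer confirms an edited segment."""
        self.memory[source] = post_edit

mt = ToyAdaptiveTranslator(lambda s: f"[draft MT of: {s}]")
print(mt.translate("Power off the device."))   # draft from the baseline engine
mt.learn("Power off the device.", "Schalten Sie das Gerät aus.")
print(mt.translate("Power off the device."))   # the confirmed edit is reused
```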

Furthermore, as many customers still onboard with us with low-volume or low-quality assets, we’ve used AI techniques – such as our Neural Aligner and TermFinder – to build linguistic assets (such as TMs, termbases, and glossaries) for our clients, which helps them produce content faster.
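For illustration only, here is a toy sketch of the two ideas (pairing sentences positionally to seed a TM, and proposing term candidates from word frequency); our Neural Aligner and TermFinder use neural models rather than these simplistic heuristics:

```python
import re
from collections import Counter

def toy_align(source_doc: str, target_doc: str):
    """Pair sentences by position to seed a tiny translation memory.
    A neural aligner scores candidate pairs instead of relying on order."""
    split = lambda doc: [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]
    return list(zip(split(source_doc), split(target_doc)))

def toy_term_candidates(source_doc: str, min_count: int = 2):
    """Propose term candidates from frequently repeated words in the source."""
    words = re.findall(r"[a-z][a-z-]{3,}", source_doc.lower())
    return [word for word, count in Counter(words).most_common() if count >= min_count]

tm = toy_align("Press the red button. Wait five seconds.",
               "Drücken Sie den roten Knopf. Warten Sie fünf Sekunden.")
print(tm)
print(toy_term_candidates("The adapter connects the adapter cable to the adapter port."))
```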

The result – doing more, faster, and easier

By combining automation with AI, we’ve seen a number of improvements:

  • Faster delivery – with throughput times slashed
  • Improved quality – with previous quality thresholds beaten
  • Delivery at scale – with the number of projects and tasks handled per PM increased by 181% and 104% respectively

Ultimately, we’ve been able to scale our business and successfully deliver more work, more easily. In turn, this has allowed us to deliver greater value to our customers.

One example is a leading US-based electronics distributor, where working with us helped them grow their international share of revenue from 15% to 55% in 4 years by reducing the time to launch their products globally from 4 weeks to 5 days.

Another is a leading publisher of scientific, technical, and medical content, where we enabled them to acquire thousands of new customers across Europe by translating over 1M words in twelve weeks to support the launch of a new online learning offering in time for the new academic year.

Lessons learned – tips for others

1. Don’t underestimate the power of mental models

Mental models are people’s ideas, attitudes, decision-making approaches, and preconceptions. In this context, they relate to people’s concerns about technology and, by extension, their approach to adopting new systems.

Whilst in our technology team we thought we knew all the major factors influencing the selection of the right translator for the task, it became clear very quickly that our Delivery team had a far richer decision-making process.

To see where we were having an impact, we tracked the position of the chosen translator in our ranking, aiming for them to appear in the top three, or at the very least on the first page. However, when we first turned it on, we found that our PMs ignored its output more often than they used it, selecting people on page 19 or 25!

When we drilled into this, we found that we had gaps in our translator profiles and in the algorithm’s decision points, requiring us to build these out further to match real-world decisions.
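As an illustration of the kind of instrumentation involved (hypothetical code, not our system), tracking where the PM’s chosen translator sat in the algorithm’s ranking might look like this:

```python
def chosen_rank(ranked_ids, chosen_id):
    """1-based position of the PM's chosen translator in the algorithm's ranking."""
    return ranked_ids.index(chosen_id) + 1

# (ranking produced by the algorithm, translator actually chosen by the PM)
assignments = [
    (["t1", "t2", "t3", "t4"], "t2"),   # chosen translator was ranked 2nd
    (["t7", "t8", "t9", "t1"], "t1"),   # chosen translator was ranked 4th
]
ranks = [chosen_rank(ranking, chosen) for ranking, chosen in assignments]
top3_share = sum(r <= 3 for r in ranks) / len(ranks)
print(ranks, f"top-3 share: {top3_share:.0%}")
```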

In retrospect, we should have done more work to understand our team’s current mental model, and to create a new model which addressed how they would work with this new system. Doing this would have made our algorithms more accurate earlier – and the transition to the new system much smoother!

2. Plan and prepare

Another key lesson was that we underestimated the amount of data work involved. Beyond building the actual system components, we spent a lot of time simply trying to understand how things, people, and processes worked.

This then triggered a lot of work to make sure we were capturing the data required to support the decisions we wanted the system to make, and to support the operational procedures around them.

As data is at the heart of AI- and automation-driven systems, this was an iterative process, and it took a series of trials (and errors) to achieve the desired impact and get users to accept the new approach. Make sure you allow plenty of time at these stages.

3. Use AI where it can have an impact

There is a massive amount of hype around AI at the moment, which makes it easy to fall into the trap of using it in the wrong places.

The current state of the art in AI is great at certain types of tasks, such as classification, clustering, generation, or recommendation, with its performance typically defined at training time (i.e. when the model is built). When used in the right place, it can significantly enhance a process or feature.

Like many others, we sometimes made the mistake of trying to use AI in places where it wasn’t the best tool for the job, because it felt cool or trendy. We quickly learned that combining it with traditional computation created a more powerful solution, using the best of both approaches.

So do look to use AI, but use it where it makes sense and adds value. Combine it with traditional computation to enhance or enrich a process. Use it to support people, to amplify their impact, and to create that perfect combination.

 

David Meikle
CTO, Lingo24

XTRF Summit

XTRF Summit brings together prospective and current customers, as well as wider localization industry stakeholders for a day of learning and knowledge‑sharing. The action-packed program includes networking sessions to mix and mingle with colleagues from across the globe, eye‑opening panel discussions, and live interactive presentations led by some of the best in the business. It’s an unmissable opportunity for members of the localization community to help each other be better prepared for this ever-changing market.
