VENDORiQ: Microsoft Releases Z-code MoE Hyperscale Language Model

The Latest

10 May 2022: Microsoft has integrated its Z-code Mixture of Experts (MoE) models into Translator and other Azure AI services to improve the quality and accuracy of its translation capabilities. The Z-code MoE models speed up translation of Microsoft Word, PowerPoint and PDF documents, and 107 languages are currently supported.
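
For organisations evaluating the service, these models sit behind Azure's Translator Text REST API (v3.0). The snippet below is a minimal sketch of a single translation request using Python and the requests library; the key, region and language codes are placeholders to be replaced with your own Azure resource values.

```python
import requests

# Placeholder values: substitute your own Azure Translator resource key and region.
KEY = "<your-translator-resource-key>"
REGION = "<your-resource-region>"          # e.g. "australiaeast"
ENDPOINT = "https://api.cognitive.microsofttranslator.com"

def translate(text, to_lang="fr", from_lang="en"):
    """Send one string to the Azure Translator Text Translation v3.0 REST API."""
    resp = requests.post(
        f"{ENDPOINT}/translate",
        params={"api-version": "3.0", "from": from_lang, "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    resp.raise_for_status()
    # The API returns a list of results, one per input text.
    return resp.json()[0]["translations"][0]["text"]

print(translate("Machine translation keeps improving."))
```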

Why it’s Important

Pretrained machine learning (ML) models now produce faster, more consistent translations and help reduce the workload of human translators, especially for repetitive writing and translation tasks. IBRS has observed that hyperscale machine translation has already progressed in terms of computational efficiency: capabilities such as Z-code save runtime costs by activating only the parameters relevant to a specific translation task, as sketched below.
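
To illustrate how activating only task-relevant parameters saves compute, the toy sketch below implements a mixture-of-experts layer with top-1 routing in PyTorch. It is an illustrative example of the general MoE technique only, not Microsoft's Z-code implementation; the layer sizes and expert count are arbitrary.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy Mixture-of-Experts layer: a gate routes each token to a single expert,
    so only a fraction of the layer's parameters is used for any given input."""
    def __init__(self, d_model=64, n_experts=4, d_hidden=128):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.gate(x).softmax(dim=-1)   # routing probabilities per token
        top_w, top_idx = scores.max(dim=-1)     # pick the single best expert (top-1 routing)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                 # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([8, 64])
```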

However, to match (or sometimes surpass) the quality of human translators, genre-specific translation engines trained on particular types of content must be employed. The generic models offered by the hyperscale Cloud vendors are often insufficient.

Genre-specific machine translation engines involve training highly nuanced models. Solutions such as those from Omniscien Technologies, for instance, provide far more accurate models that can be curated to an organisation's content. These specialised models can also run on an organisation's own infrastructure, an important consideration for organisations that need to translate sensitive or private content while staying faithful to the context of the original text.

Who’s impacted

  • CEO
  • Corporate communications teams

What’s Next?

Machine translation services will eventually make their way into the daily lives of most people, much as global positioning systems (GPS) have been integrated into mobile devices.

Currently, free machine translation tools such as Google Translate and Bing Translator lack nuance and are far less accurate than the output of human translators. Translation apps such as SayHi allow speech-to-text translation in real time, while Papago and Waygo feature image recognition that automatically translates text on pages, signs and screens. However, these still cannot produce highly accurate translations that respect context and language register.

As such, translation at a basic level (word-for-word, literal) is not good enough for all use cases. For example, translating medical information, patents, user manuals or outputs for e-discovery requests requires much higher-fidelity translation: output that is referential, cohesive and natural-sounding. For these cases, consider specialised machine translation solutions alongside (and possibly complementing) the more general offerings from the hyperscale Cloud vendors.

Related IBRS Advisory

  1. Can IBRS provide information on the establishment and maintenance of multi-lingual Web sites?
  2. Software Agents Maturity Model
  3. Managing cultural diversity
