Google previews MUM, its new tech that’s 1,000x more powerful than BERT

Multitasking is the key differentiator, enabling MUM to acquire knowledge, understand and generate language, and interpret text, images and video all at the same time.


Google’s Prabhakar Raghavan showcased a new technology called Multitask Unified Model (MUM) at Google I/O on Tuesday. Like BERT, it’s built on a transformer architecture, but Google says it is 1,000 times more powerful and capable of multitasking to connect information for users in new ways. The company is currently running internal pilot programs with MUM, although no public rollout date was announced.

Multitasking is the difference. One of MUM’s differentiating characteristics is that it can handle multiple tasks simultaneously; in fact, it is trained across 75 languages and numerous tasks at the same time. This allows MUM to develop a more comprehensive understanding of information and the world at large, Google said in its blog post.

On stage at I/O, Raghavan provided some examples of the tasks that MUM can handle at the same time:

  • Acquire deep knowledge of the world.
  • Understand and generate language.
  • Train across 75 languages.
  • Understand multiple modalities (enabling it to understand multiple forms of information, like images, text and video).

MUM in action. At I/O, Raghavan used the query “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” as an example of a search that present-day search engines would struggle to provide relevant results for. In the simulated search leveraging MUM, Google could highlight the differences and similarities between the two mountains and surface articles about the right equipment for hiking Mt. Fuji.

Prabhakar Raghavan providing examples of how MUM might be integrated into Google Search at Google I/O.

Since MUM is multimodal, it can also understand images and video in addition to text: “Imagine taking a photo of your hiking boots and asking ‘Can I use these to hike Mt. Fuji?’” Raghavan said. “MUM would be able to understand the content of the image and the intent behind your query.” In this hypothetical scenario, MUM would let the user know whether their gear is adequate and point them to a list of recommended equipment on a Mt. Fuji blog.

Why we care. Google is integrating many different tasks to create one cohesive search experience, and if it succeeds, this could make information more accessible across mediums (text, images and video) and language barriers. If MUM works the way it was shown at I/O, it may enable people to conduct searches they previously thought were too complicated for a machine to understand.

As we saw during the outset of the pandemic, when search behavior changes, businesses must adapt. Unfortunately, we’ll have to wait to find out how this may impact search behavior (if it does at all). But if Google delivers on this advancement, competing search engines will have an even tougher challenge when it comes to increasing their market share.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

George Nguyen
Contributor
George Nguyen is the Director of SEO Editorial at Wix, where he manages the Wix SEO Learning Hub. His career is focused on disseminating best practices and reducing misinformation in search. George formerly served as an editor for Search Engine Land, covering organic and paid search.
