Points of View

Memory Networks Part II: AI with memory could be the holy grail of general-purpose intelligence—but is it still a fantasy?

Mar 1, 2019 | Reetika Fleming and Maria Terekhova

Our recent POV on memory networks illustrated that this technology’s potential to turn today’s highly specialized AI into true general-purpose intelligence is immense. If such general-purpose AI were put in the hands of organizations, it could bring them digital transformation on an unprecedentedly holistic scale. However, inevitably, there is a catch. Although the potential of this technology is clear and pioneering companies are making leaps and bounds toward commercializing it, the fact remains that memory networks are still very nascent.


In this POV, we examine where memory networks are already gaining traction outside of the lab with enterprises and explore the lingering issues that will need to be resolved before memory networks can become a true change agent for enterprises.


Memory networks are already gaining traction in the real world—slow movers should watch and learn from their faster-moving peers


It would undoubtedly be valuable to implement just one form of AI that could then extend to many different business domains. Implementing a single powerful AI would be more practical than buying, implementing, and training a new form of AI for each use case.


It’s not just the adaptability between business functions that promises to deliver value to enterprises. For conglomerates that engage in many different lines of business, an adaptable AI would also be a boon as it would retain knowledge of the company as a whole even as it was assigned varied tasks between business lines. Even more nuanced but no less valuable would be the benefits an AI with memory could bring to enterprises that engage in very contextual transactions, such as booking hotels for a customer or assembling complex machinery, as the AI would be able to balance current inputs with its prior knowledge of a customer or a vehicle model.
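The "balancing current inputs with prior knowledge" behavior described above is, at its core, what a recurrent network's hidden state does: each new input updates, rather than replaces, a running summary of everything seen so far. The following is a minimal illustrative sketch of that mechanism (not any vendor's implementation; all names and dimensions here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.standard_normal((4, 3)) * 0.1   # input-to-hidden weights
W_h = rng.standard_normal((4, 4)) * 0.1   # hidden-to-hidden weights (the "memory")
b = np.zeros(4)

def step(h_prev, x_t):
    """One RNN step: blend the new input x_t with the accumulated state h_prev."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(4)                            # start with an empty memory
for x_t in rng.standard_normal((5, 3)):    # a sequence of 5 inputs
    h = step(h, x_t)                       # each step updates, not replaces, the memory

print(h)  # final state summarizes the whole sequence
```

Because the hidden state persists across steps, the network's response to each new input is conditioned on its history, which is what lets such a system weigh a customer's past bookings or a vehicle model's prior behavior against the current request.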


In fact, several enterprises are getting a foot in the door by trying out early forms of the technology. Notable real-world applications to date include:


  • Training autonomous vehicles. In collaboration with memory network startup Nnaisense, automotive giant Audi is already using recurrent neural networks (RNNs) to teach its cars to autonomously park in various environments and landscapes, starting with a model car (see Exhibit 1).
  • Predicting financial markets. German asset manager Acatis Investment Management GmbH is using Nnaisense’s RNN-based time-series prediction capability to forecast the behavior of financial markets for superior asset management in its clients’ portfolios.
  • Machine translation. Memory networks are already being used for real-time automatic translation between languages in messaging apps. A notable example is SwiftKey, acquired by Microsoft, whose keyboard software uses neural networks for language prediction.


Exhibit 1: Nnaisense-enhanced Audi model car


Source: Audi, 2016


For now, these forms of memory network are capable of transferring knowledge between tasks within the same domain, but as we saw in Part 1 of this two-part POV, the ultimate goal would be to develop a solution that could move between functions and even industries. An early form of such a system could be, for instance, a next-generation cognitive assistant capable of carrying out any number of diverse demands with high accuracy.


For the time being, the limited scope of memory networks could even be beneficial for enterprises wishing to test-drive the technology, as it will ensure they start small and in a cordoned environment before scaling the technology enterprise-wide.


Memory networks’ benefits are still potential rather than actual—enterprises must bide their time and choose their moment to invest


There is a good reason to proceed cautiously with memory network implementation for the time being. Although progress is being made in giving deep learning networks memory by technology powerhouses like DeepMind, Nnaisense, and FAIR, there are still significant obstacles to overcome to make the technology fully fit for purpose.


  • Blackout catastrophes. The connections in DeepMind’s EWC algorithm can only become less flexible over time, eventually locking into an unalterable state and making the underlying neural network incapable of recalling existing memories or absorbing new data—a blackout catastrophe. Although DeepMind says this didn’t happen in testing, it acknowledges that this could have been because the system was operating below maximum capacity. This fact raises the risk that if the EWC-enhanced network were expanded to enterprise scale, such a blackout could occur.

  • Diminished performance. DeepMind acknowledged that “[We] have demonstrated sequential learning but we haven’t proved it is an improvement on the efficiency of learning.” In other words, although the EWC-enhanced network could keep knowledge it acquired while learning each game, its performance quality for any single game was inferior to a standard neural network specializing in one game only. The result suggests that although sequential learning is theoretically capable of better performance than standard neural networks, the technology is not yet at that stage.
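The EWC mechanism behind the first point can be sketched in a few lines. The core idea (as published by DeepMind) is a quadratic penalty that anchors each weight near its value after previous tasks, scaled by an estimate of how important that weight was. The sketch below is illustrative only; the variable names and values are hypothetical, and a real implementation would estimate importance from the Fisher information of the trained network:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic penalty that resists changing weights important to past tasks.

    theta     -- candidate weights while learning the new task
    theta_old -- weights after the previous task
    fisher    -- per-weight importance estimates (Fisher information)
    lam       -- how strongly old knowledge is protected
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

theta_old = np.array([0.5, -1.2, 0.8])   # weights after learning task A
fisher = np.array([10.0, 0.1, 5.0])      # estimated importance per weight
theta = np.array([0.6, -0.5, 0.8])       # candidate weights for task B

# Moving a high-importance weight (fisher = 10.0) by 0.1 costs more than
# moving a low-importance weight (fisher = 0.1) by 0.7.
print(ewc_penalty(theta, theta_old, fisher))
```

This also shows why a blackout catastrophe is conceivable: importance scores only accumulate as tasks are added, so ever more weights become expensive to move, and in the limit the network can no longer adapt at all.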


In a lab environment, such shortcomings may be annoying hindrances to performance improvement. In practice, however, their consequences could be dire. Imagine, for instance, an autonomous vehicle equipped with a memory network trained to adapt to different environments as it drives—accelerating as it pulls onto a country highway, then decelerating and paying closer attention to obstacles such as pedestrians as it passes through a rural town. In the case of a blackout catastrophe, the vehicle could become locked into “highway” mode and fail to adjust to the town landscape, traveling at high speed and becoming a serious risk to passing pedestrians. This illustrates how important it is to judge memory networks not by their ideal end state but by their current stage of maturity before investing in and deploying them in an organization.


The Bottom Line: Memory networks are in their infancy, but when they reach adulthood, the demand for them will be fierce—start planning now to map how your enterprise will adopt them.


Memory networks are clearly still imperfect, but they are the closest development in AI toward general-purpose intelligence. Enterprises should, therefore, be keeping a very close and constant watch on research in this space because when memory networks do mature, the benefits for enterprises will be significant. Enterprises will have access to AI systems capable of forming context and referring to history—in essence, capable of teaching themselves and moving seamlessly between complex tasks and unexpected circumstances. The efficiency gains, cost savings, and operational optimization such AI systems could bring could be phenomenal. However, for now, enterprises moving cautiously and not over-investing in the technology are on the right track, as the best time to pour money into memory networks will be in a few years when they are more sophisticated and when sequential learning’s benefits move from theoretical to actual.
