Salesforce Research: Knowledge graphs and machine learning to power Einstein

Explainable AI in real life could mean Einstein not just answering your questions, but also justifying its answers. Advancing the state of the art in natural language processing happens at the intersection of graphs and machine learning.

A super geeky topic, which could have super important repercussions in the real world. That description could very well fit anything from cold fusion to knowledge graphs, so a bit of unpacking is in order. (Hint: it’s about Salesforce, and Salesforce is not into cold fusion as far as we know.)

If you’re into science, chances are you know arXiv.org. arXiv is a repository of electronic preprints of scientific papers. In other words, it’s where cutting-edge research often appears first. Some months back, a paper by Salesforce researchers appeared on arXiv, titled “Multi-Hop Knowledge Graph Reasoning with Reward Shaping.”

The paper elaborates on a technique for using knowledge graphs with machine learning; specifically, a branch of machine learning called reinforcement learning. This is something that holds great promise as a way to get the best of both worlds: Curated, top-down knowledge representation (knowledge graphs), and emergent, bottom-up pattern recognition (machine learning).

This seemingly dry topic piqued our interest for a number of reasons, not the least of which was the prospect of seeing this being applied by Salesforce. Xi Victoria Lin, research scientist at Salesforce and the paper’s primary author, was kind enough to answer our questions.

Salesforce Research: it’s all about answering questions

To start with the obvious, the fact that this paper was published at all says a lot in and of itself. Salesforce presumably faces the same issue everyone else faces in staffing its research these days: the boom in the applicability of machine learning to real-world problems means there is an ongoing race to attract and retain researchers.

People in the research community have an ethos of sharing their accomplishments with the world by publishing in conferences and journals. That, presumably, has a lot to do with why we are lately seeing a number of those publications coming from places such as Salesforce.

The paper, presented by Lin at the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), was well received. The authors have also released the source code on GitHub. But what is it all about, what was the motivation, and what is the novelty of their approach?

Salesforce Einstein: A virtual AI assistant embedded in Salesforce’s offering. Salesforce is looking into ways of adding explainable question answering to its capabilities.

For Salesforce Research, it’s all about question answering. This is obvious when browsing through their key topics and publications. And it makes sense, considering Salesforce’s offering: would it not be much easier and more productive to simply ask for whatever it is you want to find in your CRM, rather than having to go through an API or a user interface, no matter how well designed those may be?

Lin said:

“In the near future, we would like to enable machines to answer questions over multi-modal information, which includes unstructured data such as text and images, as well as structured data such as knowledge graphs and web tables. This work is a step towards a building block which enables the question answering system to effectively retrieve target information from (incomplete) knowledge graphs.”

She went on to add that Salesforce Research is aiming to tackle AI’s communication problem. Lin and her colleagues work on a wide range of NLP problems, spanning advancements in text summarization, more efficient natural language interfaces, and a unified approach to language understanding:

“Deep learning is the umbrella theme of the lab, which means we also work on areas outside NLP, including core machine learning projects such as novel neural architectures and other application areas such as computer vision and speech technology.”

Not tested on real data — yet

Lin also emphasized that deep learning is not the be-all and end-all. For example, it was pointed out to her that the path-finding approach her team presented, which uses deep reinforcement learning, is related to the “relational pathfinding” technique proposed in a 1992 paper:

“The learning algorithm in that paper is not neural-based. My take-away from this is that revisiting earlier findings in inductive logic programming and possibly combining them with deep learning approaches may result in stronger algorithms.”

The obvious point of integration would be Einstein, Salesforce’s own virtual assistant. Based on Lin’s answers, it does not look like this work has been incorporated into Einstein yet, although conceptually it seems possible. Lin explained that this work is a research prototype, using benchmark datasets publicly available to academia.

An incomplete knowledge graph, where some links (edges) are not explicit.

It seems that Salesforce data and infrastructure were not used in the context of the publication. All the data Lin used could fit on a machine with 4GB of RAM. Special data structures for representation and storage to enable fast access to the graph were not really needed, said Lin:

“I stored facts of the graph in a plain .txt file and read the entire graph into memory when running experiments. This is the common practice of KG research in academia. To apply the model on industry-scale knowledge graphs would require special infrastructure.”
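
For illustration, here is a minimal Python sketch of what that common academic practice might look like. It assumes the tab-separated triple format in which public benchmark datasets such as FB15k-237 are typically distributed; the file name and example fact are made up, and this is not Salesforce's actual code.

from collections import defaultdict

def load_graph(path):
    # Read a knowledge graph from a plain text file of tab-separated
    # (subject, relation, object) triples and hold the whole thing in
    # memory as an adjacency map: entity -> list of (relation, neighbor).
    graph = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            subj, rel, obj = line.rstrip("\n").split("\t")
            graph[subj].append((rel, obj))
    return graph

# Hypothetical file where each line looks like: A<TAB>born_in<TAB>California
graph = load_graph("train.txt")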

Multi-hop reasoning is an effective approach for query answering (QA) over incomplete knowledge graphs. However, there are some known issues with this approach: false negatives and sensitivity to spurious paths. Lin’s work helps address those; in effect, it adds the missing links to incomplete knowledge graphs.

One thing we wondered was whether those links are stored or generated on the fly. Lin explained that, so far, they have been generating answers on the fly for the prototype, but in a real-world setting the two approaches would most likely be mixed:

“One would cache the links generated, manually verify them periodically and add the verified links back to the knowledge graph for reuse and generating new inference paths. We haven’t tested this hypothesis on real data.”

Graphs and machine learning for the win

Another contribution of Lin’s work concerns what is called the symbolic compositionality of knowledge graph relations. Embedding is a technique widely used in machine learning, including machine learning reasoning with graphs, but embedding approaches do not explicitly leverage logical composition rules.

For example, from the embeddings (A born_in California) & (California is_in US), (A born_in US) could be deduced. But logical composition steps like this one are learned implicitly by knowledge graph embeddings, which means this approach cannot offer such logical inference paths as supporting evidence for an answer.
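
To see why the composition stays implicit, consider a TransE-style embedding model, where a fact (head, relation, tail) scores well when the head vector plus the relation vector lands close to the tail vector. This is a minimal sketch with randomly initialized toy vectors; TransE is one well-known embedding model, not necessarily the one used in the paper.

import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Toy TransE-style embeddings: a fact (h, r, t) holds when
# emb[h] + emb[r] is close to emb[t] in vector space.
emb = {name: rng.normal(size=dim)
       for name in ["A", "California", "US", "born_in", "is_in"]}

def score(h, r, t):
    # Lower is better: distance between the translated head and the tail.
    return np.linalg.norm(emb[h] + emb[r] - emb[t])

# If training has driven emb["A"] + emb["born_in"] toward emb["California"]
# and emb["California"] + emb["is_in"] toward emb["US"], then by vector
# arithmetic emb["A"] + emb["born_in"] + emb["is_in"] lands near emb["US"].
# The composition happens inside the vector space; no inference path is
# ever surfaced to the user as evidence.
print(score("A", "born_in", "California"), score("California", "is_in", "US"))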

Lin’s approach takes discrete graph paths as input, hence explicitly models compositionality. This means it can offer the user an inference path, consisting of edges that actually exist in the knowledge graph, as support for an answer. In other words, this can lead to so-called explainable AI, using the structure of the knowledge graph as supporting evidence for answers, at the expense of more computationally intensive algorithms.
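
By way of contrast with the embedding sketch above, here is a minimal sketch of explicit path reasoning: a plain breadth-first search that returns not just an answer but the chain of existing edges supporting it. The search strategy and the toy facts are ours, for illustration; the paper's actual model learns to walk the graph with reinforcement learning rather than searching exhaustively.

from collections import deque

def find_inference_path(graph, start, goal, max_hops=3):
    # Breadth-first search over explicit KG edges; returns the relational
    # path connecting start to goal, i.e. the supporting evidence.
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if len(path) >= max_hops:
            continue
        for rel, neighbor in graph.get(node, []):
            step = path + [(node, rel, neighbor)]
            if neighbor == goal:
                return step
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, step))
    return None

# Toy facts mirroring the article's example
graph = {"A": [("born_in", "California")],
         "California": [("is_in", "US")]}
print(find_inference_path(graph, "A", "US"))
# [('A', 'born_in', 'California'), ('California', 'is_in', 'US')]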

The combination of graphs and machine learning is a promising research direction, gaining more attention as a way to bridge top-down and bottom-up AI.

Combining graphs and machine learning has been getting a lot of attention lately, especially since the work published by researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh. We asked Lin for her opinion on this: are graphs an appropriate means to feed neural networks? Lin believes this is an open question, and sees a lot of research needed in this direction:

“The combination of neural networks and graphs in NLP is fairly preliminary — most neural architectures take sequences as input, which are the simplest graphs. Even our model uses relational paths instead of relational subgraphs.”

Lin mentioned work done by researchers from USC and Microsoft [PDF], which generalizes LSTMs to model graphs. She also mentioned work by Thomas N. Kipf from the University of Amsterdam [PDF], proposing graph convolutional networks to learn hidden node representations which support node classification and other downstream tasks.
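
For the curious, this is roughly what a single graph convolutional layer computes in the Kipf and Welling formulation: each node's new representation is a degree-normalized average of its neighbors' (and its own) features, passed through a linear map and a nonlinearity. The NumPy sketch below uses toy data of our own; real implementations use sparse matrices and a deep learning framework.

import numpy as np

def gcn_layer(adj, features, weights):
    # One graph convolutional layer (Kipf & Welling style):
    # H' = ReLU(D^-1/2 (A + I) D^-1/2 . H . W)
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt          # symmetric normalization
    return np.maximum(norm @ features @ weights, 0.0)  # ReLU

# Toy example: 3 nodes in a chain, 2 input features, 2 hidden units
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.random.randn(3, 2)
w = np.random.randn(2, 2)
hidden = gcn_layer(adj, feats, w)  # node representations for downstream tasks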

“It is definitely interesting to see more and more neural architectures which take general graphs as input being proposed. We are seeing graphs being used to represent relations between objects across multiple AI domains these days. Graphs are a powerful representation in the sense that, by simply varying the definitions of nodes and edges, we can model a variety of data types.

While inference over graphs is hard in general, it offers a potential way to integrate multimodal data (text, images, tables, etc.). UC Irvine researchers presented a really interesting paper at EMNLP, which improves knowledge graph completion by leveraging multimodal relational data. Their proposed architecture, for example, takes images and free-form text as node features.”

The takeaway? It may be early days for graph-based machine learning reasoning, but initial results look promising. So if one day you see your questions being answered by Einstein, along with supporting evidence, you will probably have graphs, and researchers like Lin, to thank for it.

Content retrieved from: https://www.zdnet.com/article/salesforce-research-knowledge-graphs-and-machine-learning-to-power-einstein/.

