In recent years, the natural language processing (NLP) community has seen the development of increasingly powerful language models [1, 2], capable of generating textual output that is indistinguishable from human-written text. This includes…
> TL;DR: We theoretically analyze differentiable architecture search (DARTS) to understand the role and impact of skip connections, which inspires a new method for Neural Architecture Search (NAS) using group-structured sparse gates…
We use smaller language models as generative classifiers to guide generation from larger language models. We show that this method can make generations friendlier, reduce bias and toxicity, and achieve zero-shot controllable generation of unseen topics.
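The guidance idea above can be sketched as Bayes-rule weighted decoding: at each step, the large model's next-token logits are shifted by how strongly a small class-conditional model attributes each candidate token to the desired attribute versus its opposite. This is a minimal illustrative sketch, not the released implementation; the function names, toy logits, and the `omega` weighting parameter are assumptions for illustration.

```python
import numpy as np

def guided_logits(base_logits, pos_logits, neg_logits, omega=1.0):
    """Shift the large LM's logits toward a desired attribute (sketch).

    pos_logits / neg_logits are next-token logits from a small
    class-conditional LM run with the desired vs. undesired control code.
    Via Bayes rule, log p(attribute | token) is (up to a constant)
    pos - logsumexp(pos, neg); omega scales the guidance strength.
    """
    log_posterior = pos_logits - np.logaddexp(pos_logits, neg_logits)
    return base_logits + omega * log_posterior

# Toy 5-token vocabulary: guidance moves the argmax from the base model's
# preferred token (index 0) to the attribute-preferred token (index 1).
base = np.array([2.0, 1.0, 0.5, 0.0, -1.0])   # large LM logits
pos  = np.array([0.0, 2.0, 0.0, -1.0, 0.0])   # small LM, desired code
neg  = np.array([1.0, 0.0, 0.0, 1.0, 0.0])    # small LM, undesired code
mixed = guided_logits(base, pos, neg, omega=2.0)
next_token = int(np.argmax(mixed))
```

In practice the same shifted logits would feed an ordinary sampling or greedy decoding loop, so the large model's fluency is preserved while the small model steers the attribute.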
TL;DR: We propose a new webly-supervised learning method that achieves state-of-the-art representation learning performance by training on large amounts of freely available noisy web images. Deep neural networks are known to be…
We introduce Photon, a live demo of a natural language interface to databases, based on our latest research in neural semantic parsing. 🔗 https://naturalsql.com/
August 3 – 7, 2020. Fully virtual sessions. At Salesforce Research Asia, we aim to play a more active role in raising awareness of the exciting possibilities surrounding the use of AI in…
Our Salesforce Research team invites university faculty, non-profit organizations, and NGOs to apply for our Salesforce AI Research Grant.
We are launching an open source collaborative project to build an AI Economist that can be used to guide policy making in the real world.
We invite you to join us in our mission to help improve the world with AI and economics.
We propose a simple causal (unidirectional) language model for task-oriented dialogue. SimpleTOD models the inherent dependencies between the sub-tasks of task-oriented dialogue by optimizing for all tasks in an end-to-end manner.
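End-to-end here means recasting every sub-task as one text sequence that a single causal LM learns left to right: dialogue context, then belief state, then system action, then response. A minimal sketch of that flattening is below; the marker strings and the example turn are illustrative assumptions, not the exact special tokens or data from the paper.

```python
def simpletod_sequence(context, belief, action, response):
    """Flatten one dialogue turn into a single training string (sketch).

    A causal LM trained on such sequences learns the dependency chain
    context -> belief state -> system action -> response in one pass.
    Marker tokens here are illustrative placeholders.
    """
    return (
        "<context> " + " ".join(context)
        + " <belief> " + " ; ".join(belief)
        + " <action> " + " ; ".join(action)
        + " <response> " + response
    )

# Hypothetical single-turn example for illustration.
seq = simpletod_sequence(
    context=["user: i need a cheap hotel in the north"],
    belief=["hotel price cheap", "hotel area north"],
    action=["hotel inform choice", "hotel request stars"],
    response="there are 2 options . how many stars would you like ?",
)
```

At inference time the model generates the belief, action, and response segments in order, so each later sub-task can condition on the model's own earlier predictions.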