There’s a lot of conversation these days around deep learning, specifically around its remarkable successes in solving challenging reinforcement learning (RL) problems. That said, deep RL still suffers from the need to engineer a reward function that not only reflects the task, but is also carefully shaped.
It’s a challenge Futureforce PhD Intern Hao Liu has worked to solve, co-authoring the research paper “Competitive Experience Replay” with Research Scientist Alexander Trott, Director of Research Caiming Xiong, and Salesforce Chief Scientist Richard Socher, which was recently accepted at the Seventh International Conference on Learning Representations (ICLR).
In the Q&A below, Liu explains the group’s proposed method and the impact it could have on our everyday lives.
What is your primary focus area?
I focus on machine learning, reinforcement learning, NLP, and their various applications, with a goal to understand and enable more general machine intelligence and bring its benefits to everyone.
Tell us about your ICLR research.
In our research, we consider the challenge that reinforcement learning (RL) faces when no dense reward function is provided. This limits the applicability of RL in the real world, since engineering a dense, well-shaped reward requires both RL expertise and domain-specific knowledge. It is therefore important to develop algorithms that can learn from a binary signal indicating successful task completion, or from other unshaped, sparse reward signals. We propose a novel method called Competitive Experience Replay, which efficiently supplements a sparse reward by placing learning in the context of an exploration competition between a pair of agents. Our method complements the recently proposed Hindsight Experience Replay (HER) by inducing an automatic exploratory curriculum.
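To give a rough feel for the idea, here is a minimal sketch of what a competitive reward-relabeling step could look like. This is an illustration under assumed details, not the paper’s implementation: the function name `competitive_relabel`, the batch format, and the distance threshold `eps` used to decide when two visited states “match” are all hypothetical. The relabeled rewards would then be added on top of the sparse task reward (and any HER goal relabeling) before the off-policy update.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): penalize agent A for
# reaching states that agent B also reached, and reward B for "catching" A,
# creating an automatic exploration competition between the two agents.
def competitive_relabel(batch_a, batch_b, eps=0.1):
    """batch_a / batch_b: dicts with 'states' of shape (N, d) and sparse
    'rewards' of shape (N,), sampled from each agent's replay buffer."""
    rewards_a = batch_a["rewards"].copy()
    rewards_b = batch_b["rewards"].copy()

    # Pairwise distances between states visited by the two agents.
    dists = np.linalg.norm(
        batch_a["states"][:, None, :] - batch_b["states"][None, :, :], axis=-1
    )
    matched = dists < eps  # (N_a, N_b): True where the agents visited "the same" state

    # A loses reward for every state B has also covered; B gains reward for it.
    rewards_a -= matched.any(axis=1).astype(float)
    rewards_b += matched.any(axis=0).astype(float)
    return rewards_a, rewards_b
```

The key point the sketch tries to convey is that the extra learning signal comes purely from comparing the two agents’ experience in the replay buffer, so no hand-shaped reward is needed.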
What was your reaction upon finding out your research had been selected for ICLR?
So much happiness! The first thing I did was share the great news with my amazing coauthors.
How might your research benefit the average human?
I believe there are two obvious benefits. First, this research will expand the applicability of RL in the real world, for example to household robots that learn tasks like cooking. Second, the method can be applied to other fields such as conversational agents and automatic program generation.
What’s something that really excites you right now within the world of AI and Machine Learning?
I’m really interested in large-scale models for multi-task learning and online learning. It’s exciting to see what the future holds in this field.
Learn more about our Salesforce Research Team and how you can have an impact as a PhD intern at https://einstein.ai/careers.