Projects

Read about our current projects, Inverse Reinforcement Learning and Compositional generalization in minds and machines, below.


Inverse Reinforcement Learning

Key people: Jie Huang

Inverse reinforcement learning targets complex tasks in which the reward function is difficult to specify by hand; the aim is to recover an efficient and reliable reward function from demonstrations. We assume that an expert's decisions while completing a task are optimal or close to optimal. A reward function under which the expected cumulative reward of every policy is no greater than that of the expert policy is then taken as the reward function learned from the expert demonstrations.
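
Written as a formula, this optimality condition says that under the learned reward no policy should achieve a higher expected discounted return than the expert. The symbols below (π for an arbitrary policy, π_E for the expert policy, γ for a discount factor, R for the reward) are standard notation assumed here rather than taken from the text above.

```latex
% No policy \pi attains a higher expected discounted return than the expert \pi_E
% under the learned reward R.
\[
\mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} R(s_t) \;\middle|\; \pi \right]
\;\le\;
\mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} R(s_t) \;\middle|\; \pi_E \right]
\quad \text{for all policies } \pi .
\]
```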

In general, inverse reinforcement learning learns a reward function from expert demonstrations, which can be understood as explaining the expert policy with the learned reward. When a policy must be learned from (near-)optimal trajectory samples, we can combine inverse reinforcement learning with deep learning to improve both the accuracy of the reward function and the quality of the resulting policy.
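
As a concrete illustration of this idea, the sketch below runs a simple feature-matching style inverse RL loop (in the spirit of apprenticeship learning) on a toy grid world: it repeatedly solves the task for the current reward estimate and nudges the reward weights toward the expert's discounted feature counts. The grid size, hand-coded "expert" policy, step size, and all function names are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch of linear-reward inverse RL via feature matching on a toy
# grid world. Everything here (sizes, policies, hyperparameters) is an
# illustrative assumption.
import numpy as np

N = 5                                           # 5x5 grid, states 0..24
GAMMA = 0.9
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def step(s, a):
    """Deterministic transition: move within the grid, stay put at walls."""
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    r, c = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    return r * N + c

def features(s):
    """One-hot state features, so the reward is simply a table w[s]."""
    phi = np.zeros(N * N)
    phi[s] = 1.0
    return phi

def value_iteration(w, iters=100):
    """Greedy policy for the current reward estimate R(s) = w . phi(s)."""
    V = np.zeros(N * N)
    for _ in range(iters):
        Q = np.array([[w @ features(step(s, a)) + GAMMA * V[step(s, a)]
                       for a in range(len(ACTIONS))] for s in range(N * N)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy, start=0, horizon=30):
    """Discounted feature counts from rolling out a policy."""
    mu, s = np.zeros(N * N), start
    for t in range(horizon):
        mu += (GAMMA ** t) * features(s)
        s = step(s, policy[s])
    return mu

# Hand-coded "expert" that walks to the bottom-right corner, standing in
# for real expert demonstrations.
expert_policy = np.array([3 if s % N < N - 1 else 1 for s in range(N * N)])
mu_expert = feature_expectations(expert_policy)

# Iteratively push the reward weights toward the expert's feature counts.
w = np.zeros(N * N)
for _ in range(50):
    policy = value_iteration(w)
    mu = feature_expectations(policy)
    w += 0.1 * (mu_expert - mu)        # move reward toward states the expert visits

print("Recovered reward, reshaped to the grid:")
print(w.reshape(N, N).round(2))
```

In a deep variant of this sketch, the one-hot features and linear weights would be replaced by a neural network that outputs the reward, trained with the same expert-versus-learner comparison.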

Our work applies imitation learning and inverse reinforcement learning algorithms to such complex tasks, both on a physical robot (Figure 1 below) and in its simulation environment (Figure 2 below), so that the robot can learn from a training set of expert demonstrations and match the expert's performance.


Compositional generalization in minds and machines

Key people: …

People make compositional generalizations in language, thought, and action. Once a person learns how to “photobomb,” she immediately understands how to “photobomb twice” or “photobomb vigorously.”