Convolutional Models for Robot Imitation and 1st-3rd-Person Learning
- Author: Administrator
- Posted: 2017.07.17
- Views: 326
ㅇ Title : Convolutional Models for Robot Imitation and 1st-3rd-Person Learning
ㅇ Date/Time : Monday, July 17, 2017, 11:00~
ㅇ Venue : Conference Room 713, Building 12, ETRI
ㅇ Speaker : Prof. 유상원, Indiana University
ㅇ Abstract :
In this talk, we discuss a new approach that enables a robot to learn new activities from unlabeled human example videos. Given videos of humans executing an activity from their own viewpoint (i.e., first-person videos), our objective is to have the robot learn the temporal structure of the activity as a future regression network, and learn to transfer such a model to its own motor execution. We present a new fully convolutional neural network architecture that regresses the intermediate scene representation corresponding to the future frame, thereby enabling explicit forecasting of future hand locations given the current frame. Furthermore, we introduce the problem of establishing person-level correspondences across first- and third-person videos, and describe a new semi-Siamese convolutional neural network architecture to address this novel challenge.
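The future-regression idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the architecture or code from the talk: the class name `FutureRegressionFCN`, the channel counts, and the layer depths are all assumptions. The point it shows is the structural one from the abstract: a fully convolutional network takes the intermediate scene representation of the current frame and regresses the representation expected at a future frame, preserving the spatial layout so that hand locations can later be localized in the predicted map.

```python
import torch
import torch.nn as nn

class FutureRegressionFCN(nn.Module):
    """Sketch of a fully convolutional future-regression network.
    Maps the intermediate representation of the current frame to a
    predicted representation of a future frame. All sizes here are
    illustrative assumptions, not the talk's actual design."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Output has the same channel count and spatial size as the
            # input, so a detector head applied to the predicted map can
            # forecast future hand locations explicitly.
            nn.Conv2d(128, channels, kernel_size=3, padding=1),
        )

    def forward(self, current_repr):
        # current_repr: (B, C, H, W) feature map of the current frame.
        return self.net(current_repr)

# Usage sketch: regress the future representation from the current one.
x = torch.randn(1, 64, 28, 28)            # hypothetical scene features
future = FutureRegressionFCN()(x)          # same shape: (1, 64, 28, 28)
```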
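Similarly, a semi-Siamese matcher for the first-/third-person correspondence problem can be sketched as follows. Purely as an assumption for illustration, this version shares the early convolutional layers across the two streams while keeping view-specific top layers, reflecting that the same person looks very different from first- and third-person viewpoints; the split point, layer sizes, and all names are hypothetical rather than taken from the talk.

```python
import torch
import torch.nn as nn

class SemiSiameseNet(nn.Module):
    """Sketch of a semi-Siamese matcher: a weight-shared trunk
    (the Siamese part) followed by separate view-specific heads.
    Layer sizes and the sharing split are illustrative assumptions."""
    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared trunk: applied to both views with the same weights.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # View-specific heads: separate weights per viewpoint.
        def head():
            return nn.Sequential(
                nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )
        self.first_person_head = head()
        self.third_person_head = head()

    def forward(self, first_img, third_img):
        f = self.first_person_head(self.shared(first_img))
        t = self.third_person_head(self.shared(third_img))
        # Smaller embedding distance = more likely the same person across
        # views; trainable with a standard contrastive loss.
        return torch.norm(f - t, dim=1)
```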