
Convolutional Models for Robot Imitation and 1st-3rd-Person Learning

  • Author: Administrator
  • Posted: 2017.07.17
  • Views: 326

ㅇ Title: Convolutional Models for Robot Imitation and 1st-3rd-Person Learning

ㅇ Date/Time: Monday, July 17, 2017, 11:00~

ㅇ Venue: Room 713, Building 12, ETRI

ㅇ Speaker: Prof. 유상원, Indiana University

ㅇ Abstract:

In this talk, we discuss a new approach that allows a robot to learn new activities from unlabeled human example videos. Given videos of humans executing an activity from their own viewpoint (i.e., first-person videos), our objective is to have the robot learn the temporal structure of the activity as a future regression network, and to transfer such a model to its own motor execution. We present a new fully convolutional neural network architecture that regresses the intermediate scene representation corresponding to a future frame, thereby enabling explicit forecasting of future hand locations given the current frame. Furthermore, we introduce the problem of establishing person-level correspondences across first- and third-person videos, and describe a new semi-Siamese Convolutional Neural Network architecture to address this novel challenge.
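As a rough illustration of the future regression idea mentioned above, the sketch below (in PyTorch) regresses a future frame's intermediate convolutional representation from the current frame's representation; because the network is fully convolutional, the predicted map can be decoded into forecast hand locations the same way as the current frame's map. The class name, layer sizes, and depth are illustrative assumptions, not the architecture presented in the talk.

import torch
import torch.nn as nn

class FutureRegressionFCN(nn.Module):
    """Hypothetical sketch: regress the intermediate scene representation of a
    future frame from the representation of the current frame. Channel sizes
    and depth are assumptions chosen for illustration."""
    def __init__(self, in_channels=512):
        super().__init__()
        self.regress = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, in_channels, kernel_size=1),  # map back to the feature space
        )

    def forward(self, current_feat):
        # Fully convolutional: the output keeps the spatial layout of the input,
        # so it can be decoded (e.g., into a hand heatmap) just like the current
        # frame's representation.
        return self.regress(current_feat)

# Usage sketch: predict the future feature map for a single frame's features.
feat_now = torch.randn(1, 512, 14, 14)       # e.g., conv features of the current frame
feat_future = FutureRegressionFCN()(feat_now)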

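Similarly, a minimal sketch of the "semi-Siamese" idea for matching first- and third-person observations might look like the following (again PyTorch). The split between shared and view-specific layers, the layer sizes, and the cosine-similarity scoring are assumptions for illustration, not the speaker's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSiameseNet(nn.Module):
    """Hypothetical sketch of a semi-Siamese design: the two view-specific
    branches (first-person vs. third-person) share early convolutional layers
    but keep separate later layers, since the two viewpoints have different
    appearance statistics."""
    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared low-level feature extractor (tied weights for both views).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Separate heads per view (the "semi" part: weights are not shared).
        def head():
            return nn.Sequential(
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, embed_dim),
            )
        self.first_person_head = head()
        self.third_person_head = head()

    def forward(self, first_person_img, third_person_img):
        e1 = self.first_person_head(self.shared(first_person_img))
        e3 = self.third_person_head(self.shared(third_person_img))
        # Matching score: embeddings of corresponding first-/third-person
        # observations should be close; a contrastive loss could train this.
        return F.cosine_similarity(e1, e3)

# Usage sketch: score a batch of first-person / third-person image pairs.
scores = SemiSiameseNet()(torch.randn(2, 3, 112, 112), torch.randn(2, 3, 112, 112))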