KCI-listed Academic Journal

Reinforcement learning with one-shot memory

DOI : 10.22819/kscg.2020.33.4.006

In modern reinforcement learning, which is used to solve problems in games, other virtual environments, and the real world, artificial neural networks serve as function approximators. Because these networks are statistical, they require large amounts of data, so they are difficult to use and apply when no simulator is available. Building an accurate simulator is often prohibitively expensive in the domains we want to solve, and obtaining adequate data and rewards from the environment is also hard, since most environments are only partially observable and rewards are relatively scarce; this is why neural networks are still rarely encountered in everyday applications. These difficulties in acquiring sufficient data limit the use of neural networks as function approximators in reinforcement learning. We therefore built a model that uses a memory structure, a form of case-based learning, to learn quickly in environments with little data and sparse rewards. In our experiments we tackled the OpenAI Gym CartPole problem using a conventional policy gradient combined with this memory; the advantage function, which evaluates the gain of an action, was implemented by adapting a one-shot-learning memory structure. In subsequent experiments the model performed poorly on average because of high variance during training. However, a comparison of learning speed showed that within the first 100 episodes its top-10 and top-5 scores were higher than those of the other algorithms. In conclusion, we found that a memory structure can be effective when data are scarce; future work should investigate techniques for reducing the variance of learning. We expect the model to aid the decision-making of AI NPCs that interact with users.
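The abstract does not include code, but the approach it describes, a policy-gradient learner whose advantage estimate comes from an episodic memory instead of a learned value function, might look roughly like the sketch below. This is a minimal illustration under stated assumptions: it uses the classic OpenAI Gym API (pre-0.26 reset/step signatures), and the EpisodicMemory class, the k-nearest-neighbour baseline, and all hyperparameters are hypothetical stand-ins rather than the authors' one-shot-memory implementation.

```python
# Hypothetical sketch: REINFORCE with an episodic-memory advantage estimate.
# The memory stores (state, return) pairs; the advantage of a state is its
# observed return minus the mean return of its k nearest stored neighbours,
# standing in for a learned value baseline. This is NOT the paper's code.
import gym
import numpy as np
import torch
import torch.nn as nn

class EpisodicMemory:
    def __init__(self, capacity=2000, k=10):
        self.states, self.returns = [], []
        self.capacity, self.k = capacity, k

    def add(self, state, ret):
        self.states.append(state)
        self.returns.append(ret)
        if len(self.states) > self.capacity:  # evict the oldest entry
            self.states.pop(0)
            self.returns.pop(0)

    def baseline(self, state):
        if not self.states:
            return 0.0
        dists = np.linalg.norm(np.array(self.states) - state, axis=1)
        nearest = np.argsort(dists)[: self.k]  # k nearest neighbours
        return float(np.mean(np.array(self.returns)[nearest]))

policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
memory = EpisodicMemory()
env = gym.make("CartPole-v1")  # classic Gym API assumed

for episode in range(100):  # the small-episode regime the paper targets
    obs = env.reset()
    log_probs, rewards, states = [], [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        states.append(np.asarray(obs, dtype=np.float32))
        obs, reward, done, _ = env.step(action.item())
        rewards.append(reward)

    # Discounted returns for the episode, computed back to front.
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + 0.99 * G
        returns.insert(0, G)

    # Memory-based advantage: return minus the k-NN baseline from memory.
    advantages = [g - memory.baseline(s) for s, g in zip(states, returns)]
    for s, g in zip(states, returns):  # write the episode to memory afterwards
        memory.add(s, g)

    loss = -torch.stack([lp * a for lp, a in zip(log_probs, advantages)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Writing the episode into memory only after its advantages are computed keeps the baseline from being contaminated by the episode being scored; the high training variance the abstract reports would surface here as noisy nearest-neighbour baselines while the memory is still sparse.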

1. Introduction

2. Related research

3. Conclusion

Acknowledgement

Reference
