Revision as of 04:28, 1 July 2017
machine learning
- Supervised learning
- Unsupervised learning
- Reinforcement learning
supervised learning
- Labels contain the correct answers at training time
- Need input, target
- Learning from difference between prediction and target
- e.g. mnist, classification
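The idea of "learning from the difference between prediction and target" can be sketched with a tiny example (an assumed illustration, not code from this page): fitting y = w*x by nudging the weight with the prediction error.

```python
# Minimal sketch (assumed example): fit y = w*x from labeled pairs by
# repeatedly nudging w with the difference between prediction and target.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs with labeled targets

w = 0.0    # model parameter
lr = 0.05  # learning rate
for _ in range(200):
    for x, target in data:
        pred = w * x
        error = pred - target  # difference between prediction and target
        w -= lr * error * x    # gradient step on the squared error

print(round(w, 2))  # converges near 2.0
```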
unsupervised learning
- Labels are not given in advance
- Need input
- Cluster by distance between inputs
- Can't predict outcome
- e.g. clustering
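"Cluster by distance between inputs" can be illustrated with a k-means-style sketch (an assumed toy example, not from this page): assign each point to its nearest centroid, then move each centroid to its cluster's mean.

```python
# Minimal sketch (assumed example): cluster 1-D points by distance,
# with no labels - only the inputs themselves.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]  # inputs only, no labels
centroids = [0.0, 10.0]                  # initial guesses

for _ in range(10):  # a few k-means style passes
    clusters = [[], []]
    for p in points:
        # assign each point to the nearest centroid
        i = min(range(2), key=lambda j: abs(p - centroids[j]))
        clusters[i].append(p)
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)  # roughly [1.0, 9.07]
```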
reinforcement learning
- A kind of unsupervised learning (no labeled answers)
- input : environment, reward, output : action
- Learn from try
- Model free
- e.g. game play, stock trading
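The "input: environment, reward / output: action" loop above can be sketched with a stub environment (names and dynamics are assumed for illustration): the agent tries an action, the environment returns the next state and a reward, and reward is the only feedback.

```python
# Minimal sketch (assumed stub environment) of the agent-environment loop.
import random
random.seed(0)

def step(state, action):
    """Toy environment: reward 1 when the action matches the state's parity."""
    reward = 1 if action == state % 2 else 0
    return (state + 1) % 4, reward  # next state, reward

state, total_reward = 0, 0
for _ in range(100):
    action = random.choice([0, 1])      # learn from try: here, random tries
    state, reward = step(state, action)
    total_reward += reward              # feedback arrives only as reward

print(total_reward)  # about half the steps rewarded by chance
```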
reinforcement learning
- Q learning
- Q learning + Neural Network
- DQN : Deep Q Learning
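Tabular Q learning, the starting point before the neural-network version, can be sketched on an assumed toy chain world (all names here are illustrative) using the standard update Q(s,a) += lr * (r + gamma * max Q(s',·) - Q(s,a)):

```python
# Minimal sketch (assumed 5-state chain world): tabular Q-learning.
# Reaching state 4 gives reward 1; actions are 0=left, 1=right.
import random
random.seed(0)

N = 5
Q = [[0.0, 0.0] for _ in range(N)]  # Q[state][action]
lr, gamma, eps = 0.5, 0.9, 0.3

for _ in range(500):  # episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice([0, 1])
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update toward the Bellman target
        Q[s][a] += lr * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # values grow toward the goal state
```

DQN replaces the table Q[s][a] with a neural network that outputs Q values for all actions given a state.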
Basic knowledge
- MDP : Markov Decision Process
- Bellman equation
- Dynamic programming
- Value, Policy
- Value function, Policy function
- Value iteration, Policy iteration
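The pieces above fit together in value iteration: dynamic programming sweeps applying the Bellman optimality backup V(s) = max_a [ r(s,a) + gamma * V(s') ] on an MDP. A sketch on an assumed 3-state deterministic MDP (the states and rewards are made up for illustration):

```python
# Minimal sketch (assumed 3-state MDP): value iteration via the Bellman
# backup V(s) = max_a [ r(s,a) + gamma * V(s') ]. State 2 is terminal.
gamma = 0.9
# transitions[s][a] = (next_state, reward)
transitions = {
    0: {"right": (1, 0.0)},
    1: {"left": (0, 0.0), "right": (2, 1.0)},
}

V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(50):  # dynamic programming sweeps until values stop changing
    for s, actions in transitions.items():
        V[s] = max(r + gamma * V[s2] for s2, r in actions.values())

print(V)  # {0: 0.9, 1: 1.0, 2: 0.0}
```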
Hands-on practice
- Required libraries: numpy, gym, tensorflow
$ pip install gym
$ pip install tensorflow
- Let's run cartpole! - cartpole_init.py
- cartpole taking random actions (left, right) - cartpole_random.py
- q-network (the NN version of q-learning) - cartpole.py
- DQN - cartpole_dqn.py
- The DQN DeepMind published in 2015 - cartpole_dqn2015.py
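A key ingredient the DQN demos rely on is experience replay. This is an assumed illustrative sketch, not the repository's code: transitions are stored in a buffer and sampled at random, which breaks the correlation between consecutive steps.

```python
# Minimal sketch (assumed, not the repo's code): the experience-replay
# buffer DQN trains from, instead of learning on consecutive steps.
import random
from collections import deque
random.seed(0)

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # a random minibatch decorrelates the training data
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.push(t, t % 2, 0.0, t + 1, False)  # dummy transitions

batch = buf.sample(32)
print(len(batch))  # 32
```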
reference
- Presentation slides: slide
- Code: github
- Paper: Playing Atari with Deep Reinforcement Learning