Latest revision as of 00:34, 29 March 2026
machine learning
Three broad categories of machine learning:
- Supervised learning
- Unsupervised learning
- Reinforcement learning
supervised learning
- At training time, the input includes both features (input values) and labels (desired outputs)
- Learns from the difference between the prediction and the label
- e.g. MNIST, classification
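The feature/label setup above can be sketched in a few lines of plain numpy: a toy logistic-regression classifier whose weights are updated from the difference between prediction and label. The data and the labeling rule here are made up purely for illustration.

```python
import numpy as np

# Synthetic supervised data: features X plus labels y given together.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # hypothetical labeling rule

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # model prediction in (0, 1)
    error = pred - y                           # prediction - label drives learning
    w -= lr * (X.T @ error) / len(X)           # gradient step on the weights
    b -= lr * error.mean()

pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((pred > 0.5) == y).mean()
```

The whole point of the supervised setting is visible in the `error` line: without labels there would be nothing to subtract the prediction from.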
unsupervised learning
- Input: features only; the feature dimensionality is usually reduced first, e.g. by projection
- Clusters inputs by the distance between them
- Humans cannot predict the outcome in advance
- e.g. clustering
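Clustering by distance can be sketched as a minimal k-means loop in plain numpy. The two Gaussian blobs are synthetic illustration data; note that no labels are given to the algorithm.

```python
import numpy as np

# Two synthetic blobs; the algorithm sees only the features, not which blob.
rng = np.random.default_rng(0)
a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(50, 2))
b = rng.normal(loc=(5.0, 5.0), scale=0.5, size=(50, 2))
X = np.vstack([a, b])

k = 2
centroids = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
for _ in range(10):
    # assign each point to its nearest centroid (distance-based grouping)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # move each centroid to the mean of its assigned points
    centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])
```

With blobs this well separated, the loop converges in a few iterations and recovers the two groups.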
reinforcement learning
- A kind of unsupervised learning (no labels are provided)
- Input: environment state and reward; output: action
- Learns by trial and error
- Model-free: the rules of the game are not given to the agent
- e.g. game play, stock trading
reinforcement learning
- Q-learning
- Q-learning + Neural Network
- DQN: Deep Q-Learning
- Adding more hidden layers is not all there is to it!
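For context on the list above: tabular Q-learning keeps a table Q(s, a) and, after every step, nudges the entry toward r + γ·max Q(s', ·); DQN replaces that table with a neural network. A minimal sketch on a made-up 5-state corridor (a hypothetical toy environment, not one of the gym environments):

```python
import random

# Toy corridor MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 gives reward 1 and ends the episode.
N, GOAL = 5, 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    done = s2 == GOAL
    return s2, (1.0 if done else 0.0), done

alpha, gamma = 0.5, 0.9
Q = [[0.0, 0.0] for _ in range(N)]
random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(2)  # random behavior policy; Q-learning is off-policy
        s2, r, done = step(s, a)
        # core update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# greedy policy read off the learned table
policy = [Q[s].index(max(Q[s])) for s in range(N)]
```

Even though the behavior policy is purely random, the learned greedy policy heads right toward the goal from every non-goal state, which is exactly the off-policy property that DQN also relies on.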
Basic knowledge
- MDP: Markov Decision Process
- Bellman equation
- Dynamic programming
- Value, Policy
- Value function, Policy function
- Value iteration, Policy iteration
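These pieces fit together in value iteration: repeatedly apply the Bellman optimality backup V(s) ← max_a [r + γ·V(s')] until the values stop changing, then extract the greedy policy. A sketch on a made-up deterministic 5-state corridor (same hypothetical toy setting as above, for illustration only):

```python
# Deterministic corridor: states 0..4; reaching state 4 (the goal) gives
# reward 1 and ends the episode.
N, GOAL, gamma = 5, 4, 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)  # 0 = left, 1 = right
    return s2, (1.0 if s2 == GOAL else 0.0)

def backup(V, s, a):
    # one Bellman backup term: immediate reward + discounted next-state value
    s2, r = step(s, a)
    return r + gamma * (0.0 if s2 == GOAL else V[s2])

V = [0.0] * N
for _ in range(50):  # value iteration: V(s) <- max over actions of the backup
    V = [max(backup(V, s, 0), backup(V, s, 1)) if s != GOAL else 0.0
         for s in range(N)]

# greedy policy extracted from the converged values
policy = [1 if backup(V, s, 1) >= backup(V, s, 0) else 0 for s in range(N)]
```

The values converge to γ-discounted distances from the goal (1.0, 0.9, 0.81, … walking back from state 3), and the greedy policy moves right everywhere, matching the Q-learning result above.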
Hands-on practice
- gym (https://gym.openai.com): a toolkit of classic games ported to Python for reinforcement learning. Some environments are implemented from scratch; the Atari ones are ports. The code is public on GitHub (https://github.com/openai/gym).
- Today's exercise: cartpole (https://gym.openai.com/envs/CartPole-v0)
- Required libraries: numpy, gym, tensorflow
$ pip install gym
$ pip install tensorflow
Steps
- First, just run cartpole! - cartpole_init.py (https://github.com/Rabierre/cartpole/blob/master/cartpole_init.py)
- A cartpole taking random actions (left, right) - cartpole_random.py (https://github.com/Rabierre/cartpole/blob/master/cartpole_random.py)
- q-network (the NN version of q-learning) - cartpole_qnetwork.py (https://github.com/Rabierre/cartpole/blob/master/cartpole_qnetwork.py)
- DQN - cartpole_dqn.py
- The DQN that DeepMind published in 2015 - cartpole_dqn2015.py
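The first two scripts boil down to the classic gym loop: `reset()`, then `step(action)` repeatedly until `done`. A sketch of that loop using a hypothetical stub environment — the real scripts call `gym.make('CartPole-v0')`; the stub here only mimics the shape of the classic gym API, not the cartpole physics:

```python
import random

class StubCartPole:
    """Stand-in with the classic gym API: reset() -> obs,
    step(action) -> (obs, reward, done, info). Not real physics."""
    def reset(self):
        self.t = 0
        return [0.0, 0.0, 0.0, 0.0]  # cart pos/vel, pole angle/vel
    def step(self, action):
        self.t += 1
        done = self.t >= 200  # episode step cap, like CartPole-v0
        return [0.0] * 4, 1.0, done, {}

env = StubCartPole()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.randrange(2)  # random policy: 0 = push left, 1 = push right
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Everything later in the exercise (q-network, DQN) only changes how `action` is chosen inside this loop.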
Reference
- Presentation slides: https://slides.com/rabierre/playing_a_game_with_rl
- Exercise code: https://github.com/Rabierre/cartpole
- DeepMind's DQN paper: Playing Atari with Deep Reinforcement Learning (https://arxiv.org/abs/1312.5602)
- TensorFlow tutorial: DQN (https://github.com/golbin/TensorFlow-Tutorials/tree/master/10%20-%20DQN)
Furthermore
- David Silver's lectures
  - Lecture notes: http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html
  - Lecture videos: https://www.youtube.com/watch?v=2pWv7GOvuf0
- Gitbook: Fundamental of Reinforcement Learning (https://www.gitbook.com/book/dnddnjs/rl/details). Written in Korean!
- Reference collection: Machine Learning