
머신러닝스터디/2016/2016 07 09



Notes

  • Embedding needs word indices as input.
    • At first we fed word frequencies from the Tokenizer as input, but the model did not learn well (see the sketch after this list).
tokenizer = Tokenizer(nb_words=1000)
X_train = tokenizer.sequences_to_matrix(X_train, mode="freq")
  • optimizer
    • With adamax, accuracy stayed around 50%.
    • TensorFlow does not provide adamax; Keras has its own implementation (code). It can be selected by name in model.compile, as in the sketch after this list.
  • Choosing an appropriate batch size
    • If the batch size is too small (e.g. 32), training takes a long time.
    • If it is too large, it uses a lot of memory.
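
The sketch below contrasts the word-frequency matrix we tried first with the integer index sequences that an Embedding layer actually expects, and shows how adamax can be selected by name. This is a minimal illustration using the Keras 1.x API of the code section below; the 1000-word vocabulary and 1000-step padding length are assumptions taken from that code.

from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

(X_train, y_train), _ = imdb.load_data(nb_words=1000)

# First attempt: each review becomes a 1000-dim vector of word frequencies.
# This discards word order and is not a valid input for an Embedding layer.
tokenizer = Tokenizer(nb_words=1000)
X_freq = tokenizer.sequences_to_matrix(X_train, mode="freq")

# What Embedding expects instead: padded sequences of integer word indices.
X_idx = pad_sequences(X_train, maxlen=1000)

# adamax is implemented inside Keras itself (per the notes, TensorFlow did not
# provide it at the time), so it can be chosen by name when compiling:
#   model.compile(loss="binary_crossentropy", optimizer="adamax",
#                 metrics=["accuracy"])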

Code

import keras
import numpy as np
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM

# Load the IMDB reviews as sequences of word indices, keeping only the
# 1000 most frequent words.
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=1000)

# Pad (or truncate) every review to a fixed length of 1000 indices so the
# Embedding layer receives a rectangular input.
X_train = pad_sequences(X_train, maxlen=1000)
X_test = pad_sequences(X_test, maxlen=1000)

# Embedding maps each of the 1000 word indices to a 64-dim vector, the LSTM
# summarizes the sequence into 32 features, and a small Dense/Dropout stack
# ends in a single sigmoid unit for binary sentiment classification.
model = Sequential()
model.add(Embedding(1000, 64, input_length=1000))
model.add(LSTM(output_dim=32, activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dense(16, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(8, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1, activation="sigmoid"))

# Binary cross-entropy with adagrad (adamax kept accuracy around 50%, see the
# notes above).
model.compile(loss="binary_crossentropy", optimizer="adagrad", metrics=["accuracy"])

model.fit(X_train, y_train, batch_size=500, nb_epoch=100)

# Evaluate on the test set, then look at a few individual predictions
# (sigmoid probabilities) next to their true labels.
loss, acc = model.evaluate(X_test, y_test, batch_size=1000)
print(loss, acc)
pred = model.predict(X_test, batch_size=20000)

print(pred[0], y_test[0])
print(pred[1], y_test[1])
print(pred[2], y_test[2])
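
The values printed above are sigmoid probabilities in [0, 1], while y_test holds 0/1 labels. A short follow-up sketch (hypothetical, not part of the original notes) that thresholds the predictions at 0.5 and recomputes test accuracy, which should match the accuracy reported by model.evaluate:

import numpy as np

# Threshold the sigmoid outputs at 0.5 to get hard 0/1 labels.
pred_labels = (pred > 0.5).astype(int).ravel()
print("test accuracy:", np.mean(pred_labels == np.asarray(y_test)))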

For next time

  • Watch the Coursera videos for week 7
