<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mediawiki.zeropage.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=211.104.144.68</id>
	<title>ZeroWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://mediawiki.zeropage.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=211.104.144.68"/>
	<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php/Special:Contributions/211.104.144.68"/>
	<updated>2026-05-15T06:00:08Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.8</generator>
	<entry>
		<id>https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50332</id>
		<title>머신러닝스터디/2016/2016 07 23</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50332"/>
		<updated>2016-07-26T11:15:30Z</updated>

		<summary type="html">&lt;p&gt;211.104.144.68: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[머신러닝스터디/2016]]&lt;br /&gt;
[[머신러닝스터디/2016/목차]]&lt;br /&gt;
== Contents ==&lt;br /&gt;
* SVM practice with [http://scikit-learn.org/stable/ sklearn]&lt;br /&gt;
** We tried to use skflow (absorbed into TensorFlow&#039;s contrib/learn), but its svm module is only included in the latest commits, so we decided to use sklearn instead&lt;br /&gt;
** sklearn version: 0.17.1&lt;br /&gt;
Installation&lt;br /&gt;
   $ sudo pip install scikit-learn&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
 import sklearn&lt;br /&gt;
 from sklearn import svm&lt;br /&gt;
 &lt;br /&gt;
 #### SVC with rbf kernel&lt;br /&gt;
 # default is rbf kernel&lt;br /&gt;
 clf = svm.SVC()&lt;br /&gt;
 x_data = [[0,0], [0,1], [1,0], [1,1]]&lt;br /&gt;
 &lt;br /&gt;
 # linear&lt;br /&gt;
 ## And gate&lt;br /&gt;
 y_data = [0, 0, 0, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;rbf&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0]) # wrong&lt;br /&gt;
 &lt;br /&gt;
 ## Or gate&lt;br /&gt;
 y_data = [0, 1, 1, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([1, 1, 1, 1])&lt;br /&gt;
 &lt;br /&gt;
 # non-linear&lt;br /&gt;
 ## Xor gate&lt;br /&gt;
 y_data = [0, 1, 1, 0]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 1, 1, 0]) # Correct answer&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### SVC with Linear kernel&lt;br /&gt;
 clf = svm.SVC(kernel=&#039;linear&#039;)&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;linear&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0])&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### LinearSVC&lt;br /&gt;
 clf = svm.LinearSVC()&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,&lt;br /&gt;
 #      intercept_scaling=1, loss=&#039;squared_hinge&#039;, max_iter=1000,&lt;br /&gt;
 #      multi_class=&#039;ovr&#039;, penalty=&#039;l2&#039;, random_state=None, tol=0.0001,&lt;br /&gt;
 #      verbose=0)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 1]) # Correct answer&lt;br /&gt;
&lt;br /&gt;
* An SVC that classifies the non-linear case but fails on the linear one?&lt;br /&gt;
* In multi-class classification, the prediction follows the outcome of the voters&lt;br /&gt;
** When 0s dominate the labels (And gate), everything becomes 0: &lt;br /&gt;
 with y_data = [0, 0, 0, 1] the prediction is [0, 0, 0, 0]&lt;br /&gt;
** When 1s dominate (Or gate), training gives the following:&lt;br /&gt;
 with y_data = [0, 1, 1, 1] the prediction is [1, 1, 1, 1]&lt;br /&gt;
* What is the difference between SVC and LinearSVC?&lt;br /&gt;
* According to [http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn-svm-svc], the two differ as follows:&lt;br /&gt;
* SVC: C-Support Vector Classification. The implementation is based on libsvm.&lt;br /&gt;
* LinearSVC: scalable linear Support Vector Machine for classification, implemented using liblinear. See the "See also" section of LinearSVC for further comparison.&lt;br /&gt;
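The comparison above can be sketched end to end. This is a minimal, hedged example (it assumes scikit-learn is installed as described; gamma=&#039;auto&#039; is passed explicitly to mirror the 0.17.x default, and the expected outputs in comments are the ones reported in the session notes above):&lt;br /&gt;

```python
# Hedged sketch of the session's comparison: rbf-kernel SVC vs LinearSVC
# on the gate data above. gamma='auto' mirrors the sklearn 0.17.x default.
from sklearn import svm

x_data = [[0, 0], [0, 1], [1, 0], [1, 1]]

# XOR is not linearly separable; the rbf kernel can still fit it.
y_xor = [0, 1, 1, 0]
rbf_clf = svm.SVC(kernel='rbf', gamma='auto')
rbf_clf.fit(x_data, y_xor)
xor_pred = list(rbf_clf.predict(x_data))
print(xor_pred)  # session notes report [0, 1, 1, 0]

# AND is linearly separable; LinearSVC (liblinear) handles it.
y_and = [0, 0, 0, 1]
lin_clf = svm.LinearSVC()
lin_clf.fit(x_data, y_and)
and_pred = list(lin_clf.predict(x_data))
print(and_pred)  # session notes report [0, 0, 0, 1]
```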
&lt;br /&gt;
== Next time ==&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>211.104.144.68</name></author>
	</entry>
	<entry>
		<id>https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50331</id>
		<title>머신러닝스터디/2016/2016 07 23</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50331"/>
		<updated>2016-07-26T10:45:57Z</updated>

		<summary type="html">&lt;p&gt;211.104.144.68: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[머신러닝스터디/2016]]&lt;br /&gt;
[[머신러닝스터디/2016/목차]]&lt;br /&gt;
== Contents ==&lt;br /&gt;
* SVM practice with [http://scikit-learn.org/stable/ sklearn]&lt;br /&gt;
** We tried to use skflow (absorbed into TensorFlow&#039;s contrib/learn), but its svm module is only included in the latest commits, so we decided to use sklearn instead&lt;br /&gt;
** sklearn version: 0.17.1&lt;br /&gt;
Installation&lt;br /&gt;
   $ sudo pip install scikit-learn&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
 import sklearn&lt;br /&gt;
 from sklearn import svm&lt;br /&gt;
 &lt;br /&gt;
 #### SVC with rbf kernel&lt;br /&gt;
 # default is rbf kernel&lt;br /&gt;
 clf = svm.SVC()&lt;br /&gt;
 x_data = [[0,0], [0,1], [1,0], [1,1]]&lt;br /&gt;
 &lt;br /&gt;
 # linear&lt;br /&gt;
 ## And gate&lt;br /&gt;
 y_data = [0, 0, 0, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;rbf&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0]) # wrong&lt;br /&gt;
 &lt;br /&gt;
 ## Or gate&lt;br /&gt;
 y_data = [0, 1, 1, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([1, 1, 1, 1])&lt;br /&gt;
 &lt;br /&gt;
 # non-linear&lt;br /&gt;
 ## Xor gate&lt;br /&gt;
 y_data = [0, 1, 1, 0]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 1, 1, 0]) # Correct answer&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### SVC with Linear kernel&lt;br /&gt;
 clf = svm.SVC(kernel=&#039;linear&#039;)&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;linear&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0])&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### LinearSVC&lt;br /&gt;
 clf = svm.LinearSVC()&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,&lt;br /&gt;
 #      intercept_scaling=1, loss=&#039;squared_hinge&#039;, max_iter=1000,&lt;br /&gt;
 #      multi_class=&#039;ovr&#039;, penalty=&#039;l2&#039;, random_state=None, tol=0.0001,&lt;br /&gt;
 #      verbose=0)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 1]) # Correct answer&lt;br /&gt;
&lt;br /&gt;
* An SVC that classifies the non-linear case but fails on the linear one?&lt;br /&gt;
* In multi-class classification, the prediction follows the outcome of the voters&lt;br /&gt;
** When 0s dominate the labels (And gate), everything becomes 0: &lt;br /&gt;
 with y_data = [0, 0, 0, 1] the prediction is [0, 0, 0, 0]&lt;br /&gt;
** When 1s dominate (Or gate), training gives the following:&lt;br /&gt;
 with y_data = [0, 1, 1, 1] the prediction is [1, 1, 1, 1]&lt;br /&gt;
&lt;br /&gt;
== Next time ==&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>211.104.144.68</name></author>
	</entry>
	<entry>
		<id>https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50330</id>
		<title>머신러닝스터디/2016/2016 07 23</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50330"/>
		<updated>2016-07-26T10:44:58Z</updated>

		<summary type="html">&lt;p&gt;211.104.144.68: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[머신러닝스터디/2016]]&lt;br /&gt;
[[머신러닝스터디/2016/목차]]&lt;br /&gt;
== Contents ==&lt;br /&gt;
* SVM practice with [http://scikit-learn.org/stable/ sklearn]&lt;br /&gt;
** We tried to use skflow (absorbed into TensorFlow&#039;s contrib/learn), but its svm module is only included in the latest commits, so we decided to use sklearn instead&lt;br /&gt;
** sklearn version: 0.17.1&lt;br /&gt;
Installation&lt;br /&gt;
   $ sudo pip install scikit-learn&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
 import sklearn&lt;br /&gt;
 from sklearn import svm&lt;br /&gt;
 &lt;br /&gt;
 #### SVC with rbf kernel&lt;br /&gt;
 # default is rbf kernel&lt;br /&gt;
 clf = svm.SVC()&lt;br /&gt;
 x_data = [[0,0], [0,1], [1,0], [1,1]]&lt;br /&gt;
 &lt;br /&gt;
 # linear&lt;br /&gt;
 ## And gate&lt;br /&gt;
 y_data = [0, 0, 0, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;rbf&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0]) # wrong&lt;br /&gt;
 &lt;br /&gt;
 ## Or gate&lt;br /&gt;
 y_data = [0, 1, 1, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([1, 1, 1, 1])&lt;br /&gt;
 &lt;br /&gt;
 # non-linear&lt;br /&gt;
 ## Xor gate&lt;br /&gt;
 y_data = [0, 1, 1, 0]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 1, 1, 0]) # Correct answer&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### SVC with Linear kernel&lt;br /&gt;
 clf = svm.SVC(kernel=&#039;linear&#039;)&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;linear&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0])&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### LinearSVC&lt;br /&gt;
 clf = svm.LinearSVC()&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,&lt;br /&gt;
 #      intercept_scaling=1, loss=&#039;squared_hinge&#039;, max_iter=1000,&lt;br /&gt;
 #      multi_class=&#039;ovr&#039;, penalty=&#039;l2&#039;, random_state=None, tol=0.0001,&lt;br /&gt;
 #      verbose=0)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 1]) # Correct answer&lt;br /&gt;
&lt;br /&gt;
* An SVC that classifies the non-linear case but fails on the linear one?&lt;br /&gt;
* In multi-class classification, the prediction follows the outcome of the voters&lt;br /&gt;
** When 0s dominate the labels (And gate), everything becomes 0: &lt;br /&gt;
 with y_data = [0, 0, 0, 1] the prediction is [0, 0, 0, 0]&lt;br /&gt;
** When 1s dominate (Or gate): &lt;br /&gt;
 with y_data = [0, 1, 1, 1] the prediction is [1, 1, 1, 1]&lt;br /&gt;
* The And gate is not multi-class&lt;br /&gt;
== Next time ==&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>211.104.144.68</name></author>
	</entry>
	<entry>
		<id>https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50329</id>
		<title>머신러닝스터디/2016/2016 07 23</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50329"/>
		<updated>2016-07-26T10:17:57Z</updated>

		<summary type="html">&lt;p&gt;211.104.144.68: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[머신러닝스터디/2016]]&lt;br /&gt;
[[머신러닝스터디/2016/목차]]&lt;br /&gt;
== Contents ==&lt;br /&gt;
* SVM practice with [http://scikit-learn.org/stable/ sklearn]&lt;br /&gt;
** We tried to use skflow (absorbed into TensorFlow&#039;s contrib/learn), but its svm module is only included in the latest commits, so we decided to use sklearn instead&lt;br /&gt;
** sklearn version: 0.17.1&lt;br /&gt;
Installation&lt;br /&gt;
   $ sudo pip install scikit-learn&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
 import sklearn&lt;br /&gt;
 from sklearn import svm&lt;br /&gt;
 &lt;br /&gt;
 #### SVC with rbf kernel&lt;br /&gt;
 # default is rbf kernel&lt;br /&gt;
 clf = svm.SVC()&lt;br /&gt;
 x_data = [[0,0], [0,1], [1,0], [1,1]]&lt;br /&gt;
 &lt;br /&gt;
 # linear&lt;br /&gt;
 y_data = [0, 0, 0, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;rbf&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0]) # wrong&lt;br /&gt;
 &lt;br /&gt;
 # non-linear&lt;br /&gt;
 y_data = [0, 1, 1, 0]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 1, 1, 0]) # Correct answer&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### SVC with Linear kernel&lt;br /&gt;
 clf = svm.SVC(kernel=&#039;linear&#039;)&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;linear&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0])&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### LinearSVC&lt;br /&gt;
 clf = svm.LinearSVC()&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,&lt;br /&gt;
 #      intercept_scaling=1, loss=&#039;squared_hinge&#039;, max_iter=1000,&lt;br /&gt;
 #      multi_class=&#039;ovr&#039;, penalty=&#039;l2&#039;, random_state=None, tol=0.0001,&lt;br /&gt;
 #      verbose=0)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 1]) # Correct answer&lt;br /&gt;
== Next time ==&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>211.104.144.68</name></author>
	</entry>
	<entry>
		<id>https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50328</id>
		<title>머신러닝스터디/2016/2016 07 23</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50328"/>
		<updated>2016-07-26T10:17:47Z</updated>

		<summary type="html">&lt;p&gt;211.104.144.68: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[머신러닝스터디/2016]]&lt;br /&gt;
[[머신러닝스터디/2016/목차]]&lt;br /&gt;
== Contents ==&lt;br /&gt;
* SVM practice with [http://scikit-learn.org/stable/ sklearn]&lt;br /&gt;
** We tried to use skflow (absorbed into TensorFlow&#039;s contrib/learn), but its svm module is only included in the latest commits, so we decided to use sklearn instead&lt;br /&gt;
** sklearn version: 0.17.1&lt;br /&gt;
* Installation&lt;br /&gt;
   $ sudo pip install scikit-learn&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
 import sklearn&lt;br /&gt;
 from sklearn import svm&lt;br /&gt;
 &lt;br /&gt;
 #### SVC with rbf kernel&lt;br /&gt;
 # default is rbf kernel&lt;br /&gt;
 clf = svm.SVC()&lt;br /&gt;
 x_data = [[0,0], [0,1], [1,0], [1,1]]&lt;br /&gt;
 &lt;br /&gt;
 # linear&lt;br /&gt;
 y_data = [0, 0, 0, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;rbf&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0]) # wrong&lt;br /&gt;
 &lt;br /&gt;
 # non-linear&lt;br /&gt;
 y_data = [0, 1, 1, 0]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 1, 1, 0]) # Correct answer&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### SVC with Linear kernel&lt;br /&gt;
 clf = svm.SVC(kernel=&#039;linear&#039;)&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;linear&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0])&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### LinearSVC&lt;br /&gt;
 clf = svm.LinearSVC()&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,&lt;br /&gt;
 #      intercept_scaling=1, loss=&#039;squared_hinge&#039;, max_iter=1000,&lt;br /&gt;
 #      multi_class=&#039;ovr&#039;, penalty=&#039;l2&#039;, random_state=None, tol=0.0001,&lt;br /&gt;
 #      verbose=0)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 1]) # Correct answer&lt;br /&gt;
== Next time ==&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>211.104.144.68</name></author>
	</entry>
	<entry>
		<id>https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50327</id>
		<title>머신러닝스터디/2016/2016 07 23</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.zeropage.org/index.php?title=%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D%EC%8A%A4%ED%84%B0%EB%94%94/2016/2016_07_23&amp;diff=50327"/>
		<updated>2016-07-26T10:17:28Z</updated>

		<summary type="html">&lt;p&gt;211.104.144.68: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[머신러닝스터디/2016]]&lt;br /&gt;
[[머신러닝스터디/2016/목차]]&lt;br /&gt;
== Contents ==&lt;br /&gt;
* SVM practice with [http://scikit-learn.org/stable/ sklearn]&lt;br /&gt;
** We tried to use skflow (absorbed into TensorFlow&#039;s contrib/learn), but its svm module is only included in the latest commits, so we decided to use sklearn instead&lt;br /&gt;
** sklearn version: 0.17.1&lt;br /&gt;
    $ sudo pip install scikit-learn&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
 import sklearn&lt;br /&gt;
 from sklearn import svm&lt;br /&gt;
 &lt;br /&gt;
 #### SVC with rbf kernel&lt;br /&gt;
 # default is rbf kernel&lt;br /&gt;
 clf = svm.SVC()&lt;br /&gt;
 x_data = [[0,0], [0,1], [1,0], [1,1]]&lt;br /&gt;
 &lt;br /&gt;
 # linear&lt;br /&gt;
 y_data = [0, 0, 0, 1]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;rbf&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0]) # wrong&lt;br /&gt;
 &lt;br /&gt;
 # non-linear&lt;br /&gt;
 y_data = [0, 1, 1, 0]&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 1, 1, 0]) # Correct answer&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### SVC with Linear kernel&lt;br /&gt;
 clf = svm.SVC(kernel=&#039;linear&#039;)&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,&lt;br /&gt;
 #   decision_function_shape=None, degree=3, gamma=&#039;auto&#039;, kernel=&#039;linear&#039;,&lt;br /&gt;
 #   max_iter=-1, probability=False, random_state=None, shrinking=True,&lt;br /&gt;
 #   tol=0.001, verbose=False)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 0])&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #### LinearSVC&lt;br /&gt;
 clf = svm.LinearSVC()&lt;br /&gt;
 clf.fit(x_data, y_data)&lt;br /&gt;
 # LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,&lt;br /&gt;
 #      intercept_scaling=1, loss=&#039;squared_hinge&#039;, max_iter=1000,&lt;br /&gt;
 #      multi_class=&#039;ovr&#039;, penalty=&#039;l2&#039;, random_state=None, tol=0.0001,&lt;br /&gt;
 #      verbose=0)&lt;br /&gt;
 clf.predict(x_data)&lt;br /&gt;
 # array([0, 0, 0, 1]) # Correct answer&lt;br /&gt;
== Next time ==&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>211.104.144.68</name></author>
	</entry>
</feed>