{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# **Ch7 Recommendation Engines and Model Evaluation**\n",
"- Companion material for the book *머신러닝과 통계* [**(download)**](http://acornpub.co.kr/book/statistics-machine-learning)\n",
"- The analysis uses the MovieLens dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## **1 Loading the Data**\n",
"- Load the **movie CSV data**\n",
"- Merge the tables into one and convert it into a **user/movie pivot table**\n",
"- Convert the pivot table into a **NumPy matrix** for easier computation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" userId movieId rating timestamp\n",
"0 1 31 2.5 1260759144\n",
"1 1 1029 3.0 1260759179\n",
" movieId title genres\n",
"0 1 Toy Story (1995) Adventure|Animation|Children|Comedy|Fantasy\n",
"1 2 Jumanji (1995) Adventure|Children|Fantasy\n"
]
}
],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"ratings = pd.read_csv(\"data/ml-latest-small/ratings.csv\")\n",
"print (ratings.head(2))\n",
"movies = pd.read_csv(\"data/ml-latest-small/movies.csv\")\n",
"print (movies.head(2))"
]
},
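{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick size check before pivoting (an illustrative aside, not part of the original text): the counts below are the user and movie dimensions that the pivot table later relies on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# How many ratings, users, and movies are in the raw table?\n",
"print(\"ratings      :\", ratings.shape)\n",
"print(\"unique users :\", ratings['userId'].nunique())\n",
"print(\"unique movies:\", ratings['movieId'].nunique())"
]
},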
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" userId movieId rating title\n",
"0 1 31 2.5 Dangerous Minds (1995)\n",
"1 1 1029 3.0 Dumbo (1941)\n",
"2 1 1061 3.0 Sleepers (1996)"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Merge the movie titles and the rating records into a single DataFrame\n",
"ratings = pd.merge(ratings[['userId', 'movieId', 'rating']], \n",
" movies[['movieId', 'title']],\n",
" how='left', left_on='movieId', right_on='movieId')\n",
"ratings.head(3)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"movieId 1 2 3 4 5 6 7 8 \\\n",
"userId \n",
"1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
"movieId 9 10 ... 161084 161155 161594 161830 161918 \\\n",
"userId ... \n",
"1 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 \n",
"2 0.0 4.0 ... 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
"movieId 161944 162376 162542 162672 163949 \n",
"userId \n",
"1 0.0 0.0 0.0 0.0 0.0 \n",
"2 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
"[3 rows x 9066 columns]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build a pivot table: rows = userId, columns = movieId, values = rating\n",
"rp = ratings.pivot_table(columns = ['movieId'], \n",
" index = ['userId'], values = 'rating')\n",
"rp = rp.fillna(0)\n",
"rp.head(3)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" ...,\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [4., 0., 0., ..., 0., 0., 0.],\n",
" [5., 0., 0., ..., 0., 0., 0.]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Convert to a NumPy array for easier computation\n",
"rp_mat = rp.values  # .values replaces the deprecated DataFrame.as_matrix()\n",
"rp_mat"
]
},
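{
"cell_type": "markdown",
"metadata": {},
"source": [
"How sparse is the rating matrix? (an aside, not from the original text) Only a small fraction of the user/movie cells hold a rating, which is why the 0/1 indicator matrices in section 3 matter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fraction of cells in the user x movie matrix that actually contain a rating\n",
"n_rated = np.count_nonzero(rp_mat)\n",
"print(\"rated cells :\", n_rated)\n",
"print(\"total cells :\", rp_mat.size)\n",
"print(\"density     : {:.4f}\".format(n_rated / rp_mat.size))"
]
},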
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## **2 Content-Based Collaborative Filtering** (cosine similarity)\n",
"### **01 User-Based Table** (user cosine similarity)\n",
"- Measures similarity between rows of the **NumPy matrix**\n",
"- The same approach is reused in the **content-based filtering** that follows\n",
"- Computing **cosine similarity** over the whole **pivot table** takes a long time\n",
"- **linear_kernel** from **sklearn.metrics.pairwise** turned out to be faster (a vectorized sketch follows the timing cell below)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cosine similarity between vectors A and B: 0.822\n"
]
}
],
"source": [
"# The cosine of the angle between them is about 0.822.\n",
"from scipy.spatial.distance import cosine\n",
"a = np.asarray([2, 1, 0, 2, 0, 1, 1, 1])\n",
"b = np.asarray([2, 1, 1, 1, 1, 0, 1, 1])\n",
"print(\"Cosine similarity between vectors A and B: {:.3f}\".format(1 - cosine(a, b)))"
]
},
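{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same toy pair checked with scikit-learn (a small aside, not from the text): `cosine_similarity` expects 2-D arrays, so the two vectors are reshaped into single-row matrices."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"# Should print roughly 0.822, matching the scipy result above\n",
"print(cosine_similarity(a.reshape(1, -1), b.reshape(1, -1))[0, 0])"
]
},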
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"userId 1 2 3 4 5 6 7 \\\n",
"userId \n",
"1 0.000000 0.000000 0.000000 0.074482 0.016818 0.000000 0.083884 \n",
"2 0.000000 0.000000 0.124295 0.118821 0.103646 0.000000 0.212985 \n",
"3 0.000000 0.124295 0.000000 0.081640 0.151531 0.060691 0.154714 \n",
"4 0.074482 0.118821 0.081640 0.000000 0.130649 0.079648 0.319745 \n",
"5 0.016818 0.103646 0.151531 0.130649 0.000000 0.063796 0.095888 \n",
"\n",
"userId 8 9 10 ... 662 663 664 \\\n",
"userId ... \n",
"1 0.000000 0.012843 0.000000 ... 0.000000 0.000000 0.014474 \n",
"2 0.113190 0.113333 0.043213 ... 0.477306 0.063202 0.077745 \n",
"3 0.249781 0.134475 0.114672 ... 0.161205 0.064198 0.176134 \n",
"4 0.191013 0.030417 0.137186 ... 0.114319 0.047228 0.136579 \n",
"5 0.165712 0.086616 0.032370 ... 0.191029 0.021142 0.146173 \n",
"\n",
"userId 665 666 667 668 669 670 671 \n",
"userId \n",
"1 0.043719 0.000000 0.000000 0.000000 0.062917 0.000000 0.017466 \n",
"2 0.164162 0.466281 0.425462 0.084646 0.024140 0.170595 0.113175 \n",
"3 0.158357 0.177098 0.124562 0.124911 0.080984 0.136606 0.170193 \n",
"4 0.254030 0.121905 0.088735 0.068483 0.104309 0.054512 0.211609 \n",
"5 0.224245 0.139721 0.058252 0.042926 0.038358 0.062642 0.225086 \n",
"\n",
"[5 rows x 671 columns]\n",
"CPU times: user 1min 34s, sys: 16 ms, total: 1min 34s\n",
"Wall time: 1min 34s\n"
]
}
],
"source": [
"%%time\n",
"# User similarity matrix\n",
"m, n = rp.shape\n",
"mat_users = np.zeros((m, m))\n",
"for i in range(m):\n",
" for j in range(m):\n",
" if i != j: \n",
" mat_users[i][j] = (1-cosine(rp_mat[i,:], rp_mat[j,:]))\n",
" else: \n",
" mat_users[i][j] = 0.\n",
" \n",
"pd_users = pd.DataFrame(mat_users, index=rp.index, columns=rp.index )\n",
"print(pd_users.head(2))"
]
},
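{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch (an alternative, not the author's code): the 1.5-minute double loop above can be replaced by a single vectorized call. `cosine_similarity` computes every user-user similarity at once; the diagonal is zeroed afterwards to match the loop's convention. The names `mat_users_fast` and `pd_users_fast` are introduced here only for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"# All 671 x 671 user-user cosine similarities in one call\n",
"mat_users_fast = cosine_similarity(rp_mat)\n",
"np.fill_diagonal(mat_users_fast, 0.)\n",
"pd_users_fast = pd.DataFrame(mat_users_fast, index=rp.index, columns=rp.index)\n",
"pd_users_fast.head(2)"
]
},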
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Similar users as user: 17\n",
" score\n",
"userId \n",
"596 0.379128\n",
"23 0.374641\n",
"355 0.329605\n",
"430 0.328872\n",
"608 0.319770\n",
"509 0.319313\n",
"105 0.309477\n",
"457 0.308201\n",
"15 0.307179\n",
"461 0.299035\n"
]
}
],
"source": [
"# Look up the users most similar to a given user in the similarity table\n",
"def topn_simusers(uid=16, n=5):\n",
"    users = pd_users.loc[uid, :].sort_values(ascending=False)\n",
"    topn_users = users.iloc[:n, ]\n",
"    topn_users = topn_users.rename('score')\n",
"    print (\"Similar users as user:\", uid)\n",
"    return pd.DataFrame(topn_users)\n",
"\n",
"# Print the IDs of the 10 users most similar to user 17\n",
"print(topn_simusers(uid=17, n=10))"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Top 10 movie ratings of user: 596\n",
" userId movieId rating title\n",
"89645 596 4262 5.0 Scarface (1983)\n",
"89732 596 6874 5.0 Kill Bill: Vol. 1 (2003)\n",
"89353 596 194 5.0 Smoke (1995)\n",
"89546 596 2329 5.0 American History X (1998)\n",
"89453 596 1193 5.0 One Flew Over the Cuckoo's Nest (1975)\n",
"89751 596 8132 5.0 Gladiator (1992)\n",
"89579 596 2858 5.0 American Beauty (1999)\n",
"89365 596 296 5.0 Pulp Fiction (1994)\n",
"89587 596 2959 5.0 Fight Club (1999)\n",
"89368 596 318 5.0 Shawshank Redemption, The (1994)\n"
]
}
],
"source": [
"# Return the movies a given user rated highest\n",
"def topn_movieratings(uid=355, n_ratings=10):\n",
"    uid_ratings = ratings.loc[ratings['userId'] == uid]\n",
"    uid_ratings = uid_ratings.sort_values(by='rating', ascending=[False])\n",
"    print (\"Top {} movie ratings of user: {}\".format(n_ratings, uid))\n",
"    return uid_ratings.iloc[:n_ratings, ]\n",
"\n",
"# Top 10 movies rated by user 596\n",
"print(topn_movieratings(uid=596, n_ratings=10))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### **02 Item-Based Table** (movie cosine similarity)\n",
"- Measures similarity between columns of the **NumPy matrix**\n",
"- The same approach is reused in the **content-based filtering** that follows\n",
"- Computing **cosine similarity** over the whole **pivot table** takes a long time\n",
"- **linear_kernel** from **sklearn.metrics.pairwise** turned out to be faster"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 145. , 0. , 0. , ..., 16. , 0. , 9. ],\n",
" [ 0. , 985. , 101.5 , ..., 16. , 119. , 152. ],\n",
" [ 0. , 101.5 , 677. , ..., 44.5 , 79. , 189.5 ],\n",
" ...,\n",
" [ 16. , 16. , 44.5 , ..., 446. , 20. , 77. ],\n",
" [ 0. , 119. , 79. , ..., 20. , 494. , 217.5 ],\n",
" [ 9. , 152. , 189.5 , ..., 77. , 217.5 , 1831.25]])"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# linear_kernel computes plain dot products between rows; it equals cosine similarity\n",
"# only when the rows are L2-normalized (e.g. TF-IDF vectors). On rp_mat (users x movies)\n",
"# it therefore yields a 671 x 671 user-by-user matrix; use rp_mat.T for movie-by-movie.\n",
"from sklearn.metrics.pairwise import linear_kernel\n",
"mat_movies = linear_kernel(rp_mat, rp_mat)\n",
"mat_movies"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(671, 671)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from scipy.spatial.distance import cdist\n",
"mat_movies = cdist(rp_mat, rp_mat)\n",
"mat_movies.shape"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(671, 671)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.metrics import pairwise_distances\n",
"mat_movies = pairwise_distances(rp_mat, metric='manhattan')\n",
"mat_movies.shape"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"# %%time\n",
"# # Movie-by-movie cosine similarity with a double loop (very slow)\n",
"# mat_movies = np.zeros((n, n))\n",
"# for i in range(n):\n",
"#     for j in range(n):\n",
"#         if i != j: mat_movies[i,j] = (1-cosine(rp_mat[:,i], rp_mat[:,j]))\n",
"#         else: mat_movies[i,j] = 0.\n",
"\n",
"# # takes roughly 56min 5s, so the result is cached to CSV and reloaded below\n",
"# print(mat_movies.shape)\n",
"# pd_movies = pd.DataFrame(mat_movies, index=rp.columns ,columns=rp.columns )\n",
"# pd_movies.to_csv('data/pd_movies.csv', sep=',')"
]
},
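{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch (an alternative, not the book's code): the commented-out 56-minute loop above can be reproduced with one vectorized call on the transposed matrix, so that movies become rows. Note the result is a dense 9066 x 9066 array (roughly 0.6 GB as float64), so enough memory is needed. The names `mat_movies_fast` and `pd_movies_fast` are illustrative only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"# Movie-movie cosine similarities: transpose so each row is one movie's rating vector\n",
"mat_movies_fast = cosine_similarity(rp_mat.T)\n",
"np.fill_diagonal(mat_movies_fast, 0.)\n",
"pd_movies_fast = pd.DataFrame(mat_movies_fast, index=rp.columns, columns=rp.columns)\n",
"pd_movies_fast.shape"
]
},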
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Movies similar to movie id: 589, Terminator 2: Judgment Day (1991), are\n",
" 589 movieId title\n",
"0 0.702256 480 Jurassic Park (1993)\n",
"1 0.636392 1240 Terminator, The (1984)\n",
"2 0.633428 110 Braveheart (1995)\n",
"3 0.619415 356 Forrest Gump (1994)\n",
"4 0.614814 377 Speed (1994)\n",
"5 0.605887 380 True Lies (1994)\n",
"6 0.604555 457 Fugitive, The (1993)\n",
"7 0.591071 593 Silence of the Lambs, The (1991)\n",
"8 0.579325 367 Mask, The (1994)\n",
"9 0.577299 1036 Die Hard (1988)\n",
"10 0.576275 592 Batman (1989)\n",
"11 0.568341 296 Pulp Fiction (1994)\n",
"12 0.564779 1196 Star Wars: Episode V - The Empire Strikes Back...\n",
"13 0.562415 260 Star Wars: Episode IV - A New Hope (1977)\n",
"14 0.553626 47 Seven (a.k.a. Se7en) (1995)\n"
]
}
],
"source": [
"pd_movies = pd.read_csv(\"data/pd_movies.csv\", index_col='movieId')\n",
"\n",
"# Finding similar movies\n",
"def topn_simovies(mid=588, n=15):\n",
"    mid_ratings = pd_movies.loc[mid, :].sort_values(ascending=False)\n",
"    topn_movies = pd.DataFrame(mid_ratings.iloc[:n, ])\n",
"    topn_movies['index1'] = topn_movies.index\n",
"    topn_movies['index1'] = topn_movies['index1'].astype('int64')\n",
"    topn_movies = pd.merge(topn_movies, movies[['movieId', 'title']],\n",
"                           how='left', left_on='index1', right_on='movieId')\n",
"    print (\"Movies similar to movie id: {}, {}, are\".format(\n",
"        mid,\n",
"        movies['title'][movies['movieId']==mid].to_string(index=False)))\n",
"    del topn_movies['index1']\n",
"    return topn_movies\n",
"\n",
"# Print the 15 movies most similar to movie id 589 (Terminator 2: Judgment Day)\n",
"print (topn_simovies(mid=589, n=15))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## **3 Collaborative Filtering with ALS** (Alternating Least Squares)\n",
"### **01 Building the Sparse Indicator Matrices**\n",
"- The computation uses a **0/1 rating-indicator matrix** (1 where a rating exists, 0 otherwise)\n",
"- Helper matrices make the computation, and the sorting after it, straightforward (a scipy.sparse aside follows the next cell)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Shape of Original Sparse Matrix (671, 9066)\n"
]
},
{
"data": {
"text/plain": [
"array([[0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" ...,\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [4., 0., 0., ..., 0., 0., 0.],\n",
" [5., 0., 0., ..., 0., 0., 0.]])"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# ratings = pd.read_csv(\"data/ml-latest-small/ratings.csv\")\n",
"# movies = pd.read_csv(\"data/ml-latest-small/movies.csv\")\n",
"# rp = ratings.pivot_table(columns=['movieId'], index=['userId'], values='rating')\n",
"# rp = rp.fillna(0)\n",
"A = rp.values\n",
"print (\"\\nShape of Original Sparse Matrix\", A.shape)\n",
"A"
]
},
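{
"cell_type": "markdown",
"metadata": {},
"source": [
"An aside on actual sparse storage (a sketch, not part of the original flow): the dense matrix `A` stores a float for every empty cell, while a `scipy.sparse` CSR matrix keeps only the observed ratings. The comparison below is illustrative; the rest of the section keeps using the dense arrays."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy.sparse import csr_matrix\n",
"\n",
"A_sparse = csr_matrix(A)   # stores only the non-zero ratings\n",
"sparse_bytes = A_sparse.data.nbytes + A_sparse.indices.nbytes + A_sparse.indptr.nbytes\n",
"print(\"dense bytes   :\", A.nbytes)\n",
"print(\"sparse bytes  :\", sparse_bytes)\n",
"print(\"stored ratings:\", A_sparse.nnz)"
]
},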
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" ...,\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [1., 0., 0., ..., 0., 0., 0.],\n",
" [1., 0., 0., ..., 0., 0., 0.]])"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build a 0/1 indicator matrix: 1 where a rating (0.5-5.0) exists, 0 otherwise\n",
"W = A > 0.5\n",
"W[W == True] = 1\n",
"W[W == False] = 0\n",
"W = W.astype(np.float64, copy=False)\n",
"W"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[0., 1., 1., ..., 1., 1., 1.],\n",
" [1., 0., 1., ..., 1., 1., 1.],\n",
" [1., 1., 0., ..., 1., 1., 1.],\n",
" ...,\n",
" [1., 1., 1., ..., 1., 1., 1.],\n",
" [0., 1., 1., ..., 1., 1., 1.],\n",
" [0., 1., 1., ..., 1., 1., 1.]])"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build the complement of W:\n",
"# multiplying the predicted rating matrix by W_pred zeroes out movies the user already rated,\n",
"# which makes the descending sort of the remaining predictions straightforward\n",
"W_pred = A < 0.5\n",
"W_pred[W_pred==True] = 1\n",
"W_pred[W_pred==False] = 0\n",
"W_pred = W_pred.astype(np.float64, copy=False)\n",
"np.fill_diagonal(W_pred, val=0)\n",
"W_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### **02 Building the Prediction Model from the Approximated Matrix**\n",
"- The error is measured with the **0/1 rating-indicator matrix**, so only observed ratings are counted\n",
"- The helper matrices above make the computation and the sorting of the predictions straightforward"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"  0 iterations done, RMSE: 3.2415\n",
" 10 iterations done, RMSE: 1.7181\n",
" 20 iterations done, RMSE: 1.7081\n",
" 30 iterations done, RMSE: 1.7042\n",
" 40 iterations done, RMSE: 1.7020\n",
" 50 iterations done, RMSE: 1.7007\n",
" 60 iterations done, RMSE: 1.6997\n",
"Final movie-rating model RMSE: 1.6990904715763828\n"
]
}
],
"source": [
"# Parameters\n",
"m, n = A.shape\n",
"n_iterations = 70   # number of training iterations\n",
"n_factors = 100     # number of latent factors\n",
"lmbda = 0.1         # regularization strength (not a learning rate)\n",
"\n",
"X = 5 * np.random.rand(m, n_factors)\n",
"Y = 5 * np.random.rand(n_factors, n)\n",
"\n",
"# RMSE computed over observed entries only (W masks the unrated cells)\n",
"def get_error(A, X, Y, W):\n",
"    return np.sqrt(np.sum((W * (A - np.dot(X, Y)))**2) / np.sum(W))\n",
"\n",
"errors = []\n",
"for itr in range(n_iterations):\n",
"    X = np.linalg.solve(np.dot(Y, Y.T) + lmbda*np.eye(n_factors), np.dot(Y, A.T)).T\n",
"    Y = np.linalg.solve(np.dot(X.T, X) + lmbda*np.eye(n_factors), np.dot(X.T, A))\n",
"    if itr % 10 == 0:\n",
"        print(\"{:3} iterations done, RMSE: {:.4f}\".format(itr, get_error(A, X, Y, W)))\n",
"    errors.append(get_error(A, X, Y, W))\n",
"\n",
"# Build the final prediction matrix from the learned factors\n",
"A_hat = np.dot(X, Y)\n",
"print(\"Final movie-rating model RMSE:\", get_error(A, X, Y, W))"
]
},
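{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of a masked (weighted) ALS update, offered as an extension rather than the book's method: the loop above solves the normal equations on the full matrix, so the zero entries of `A` are fitted as if they were real ratings, while only the RMSE is masked by `W`. The variant below weights each per-row solve by `W`, so only observed ratings enter the update. The names `k_w`, `X_w`, `Y_w` are hypothetical, and a small factor count with a few sweeps keeps the runtime modest."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"k_w = 20                                     # small latent dimension for the sketch\n",
"X_w = 5 * np.random.rand(m, k_w)\n",
"Y_w = 5 * np.random.rand(k_w, n)\n",
"\n",
"for sweep in range(5):\n",
"    for u in range(m):                       # user factors: only movies user u rated\n",
"        Yu = Y_w * W[u]                      # zero out columns of unrated movies\n",
"        X_w[u] = np.linalg.solve(Yu @ Yu.T + lmbda * np.eye(k_w), Yu @ A[u])\n",
"    for i in range(n):                       # item factors: only users who rated movie i\n",
"        Xi = X_w * W[:, i][:, None]          # zero out rows of users with no rating\n",
"        Y_w[:, i] = np.linalg.solve(Xi.T @ Xi + lmbda * np.eye(k_w), Xi.T @ A[:, i])\n",
"    print(\"sweep {}  masked RMSE: {:.4f}\".format(sweep, get_error(A, X_w, Y_w, W)))"
]
},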
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYQAAAEUCAYAAAAr20GQAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3XmcZFV99/HPt9eBHsABJ4JRRFkkiprIuBuVBBMxRnAJooiaBx3ciILGoKKiYFAUfWFUZBTiAhIXlCW4DMimqDwZhKghCuMjEiKMAyM4MzBLd3+fP+6pmds1Vd01S3X3UN/361Uv6p577jm/qmnur+65954r20RERPTNdAARETE7JCFERASQhBAREUUSQkREAEkIERFRJCFERASQhBAREUUSQsx6kv5I0g8kXSlpcIq6b+xC/ztJOqq2fGA3+omYaUkIsUUkLZX08aayh0m6vAvdvQT4ru2DbK9v6vPPJZ1dK3pHF/rfDTi6sWD7etuf7kI/XSHpPZJ+IenqklgXS9qnrHu4pLWS3tNm209Kur+2/AhJ3yzJ+WpJJ9XWrZZ0Ve319q5/uNimBmY6gNhuDQB7Snqa7R/VyrrxN/Vg4PY26/rLK9rrBz5k+/MAkp4PnAIcUdb9X+AFkj5ie01jI0m7AvsBy2ptfRb4Z9tXtehnue3ndOMDxPTIEUJsjWOB0yUNt1op6UhJP6z9Ynxum3qSdFyt7mJJB5Z1n6b6df5PTUcCzW3sLukqYPfSxstK+UvL8vfKL9uHlvITJZ0g6QpJn5LUX/vle42kc0vZ84B/A/60tPNoSc+U9LkOYpekH0v6vKTvS7pB0qvKun5Ji8p2l0s6usXn+UFT2RJJcyWdXNr9nqR3T/mvNLGNAeAxwG9qxWPAucCrm6ovpEoANNUd35w+YztiO6+8NvsF3Fr++zrglPJ+L+Cq8v7PgWuBncvyQ4CfA49q0dYrgQuA4bK8D/BL4EFl+STgNW3ieA7w+ea4yvtHA18DBsryM4Dzam1eCQzV6u9Ye/9Z4JDmz9XcZwexjwNPLe93oTrSGQQeB3xniu/4O8De5f2TgC8COwH/vZn/VicBvwCuBm4FPgb01T8bMAL8sFY+CHyf6gii/p3uApxd2ti9qZ81pa3G68Uz/Xea1+a9coQQW8X2Z4EnSXpC06pDgY/b/kOptwz4EnBIi2YOA061vbbUXQpcQbUD3xrPBR4PXF6OHj5IlZgaLra9DkBSH/BGSZeVugcDf9RBH1PFfqftH5d191LtkPcAbgJukHSSpD3atP0F4OXl/VHA2bZXAv8q6TRJe3cQX8OHbD+b6nzM3rYn/Mq3vRq4HHhhKfo74ALbY0317rV9NHAOcJakl9dW32n7ObXXNzYjvpgFkhBiW3gj8EkmjuW3m0a31XDD5tTdHH1UO9H6Turg2vrf196/hWoo5XBX4+DfANRBH1PFvq6pfJTqV/iY7XcCZwIflvTiFm18E3h+GeZZAFwDYPs0quT2Zkn/0EGMG4O1rwdGJD2+xepPAm8q7/+e6kigXTs/B14EvFPSyObEELNXEkJsNdu/Ai4C6leVXAgcL2kXqMbEqX7lfrtFE98E3tU4FyFpX+AgqiGnzbVW0rzy/mrg/0ja8Etf0pw22z0KuMz27yXtBvxtvU1gXuvNtiz2ckTSOHI6BziyuY6rE7w/B/4JuMS2Jamsuxc4g03H/TtxGnBii/5+BywtVwf9ZzkaaY67/j08gmqoaU1zvdg+5Sqj2FJrm5Y/TrUTXAVg+1pJnwAulbSe6tf2W2zf2tyQ7S9LejBwhaR1wHrg5Y3hJqpf1aNt4hgrr4avAD+Q9F3bx0v6EPBtSavK+g8A32vR5ieAsyUdA9xPde6hcYRwJ/A7ST8GzgJuaWzbQezN39NoiffZks4A7gWGgLe2+XznAJdRnQ8B2E/S18t2c4D3t9muuc8Nn9X2YkkflLRf+az17+F04D+BP6mV1T/DhyU9jioJND5r4/ufX4bbGn5qe7OOYGJmyc4DcmL6SXodcLPtq2c6lgcCSU8Bnmz7X2Y6lth+JSFERASQcwgREVF09RxCualokOrE0822T2pafzmwtFZ0gu17uhlTRES01tWEYHvDBGCSviDp0bZ/2VTn9ZO1IWkh1R2TjIyMHLj//vt3JdaIiAeq66+//i7b86eqNy1XGZVL1eYzcU4UgFWSTqa6W/KacpPTBLYXAYsAFixY4CVLlnQ52oiIBxZJv5m6VveHjPahuizu6cBxzcNBtg8r9QScKelXtq/oZkwREdFaV08q215q+0hgX+DIcnNSq3oGLqGaZiAiImbAtFxlZHuUalqDoUmqPQv4j+mIJyIiNtW1ISNJTwSOp7pzdWeqibJua6pzOjCX6o7L62xvyVQFERGxDXQtIdj+CdXUwBNIOgt4r+1ltt/Wrf4jImLzTPtcRraPme4+IyJiarlTOSIigCSEiIgokhAiIgJIQoiIiCIJISIigCSEiIgokhAiIgLokYTw23vu55L//C2r1rZ7LG9ERPREQrjhtns49vwb+N/f3z/ToUREzFo9kRBGhvsBcoQQETGJnkgIc4erGTpWJyFERLTVEwlhJAkhImJKPZEQGkcIGTKKiGivJxJCjhAiIqbWIwmhOqm8et3YDEcSETF79URCGB7oZ7BfGTKKiJhETyQEqIaNMmQUEdFezySEucMDOUKIiJhETyWEHCFERLTXMwmhGjLKSeWIiHZ6KiGszBFCRERbPZMQ5g73Z8goImISA91sXNKngUFgBLjZ9klN6w8GjgNWA7fbPr5bsYwM5RxCRMRkunqEYPuNtl9n+xXAIyU9urFOkoB3Ai+2fThwn6TnNrchaaGkJZKWLF++fItjGclVRhERk5qWISNJ84D5wLJa8X7ATbbXluULgYOat7W9yPYC2wvmz5+/xTE0rjKyvcVtREQ8kHU1IUjaR9J5wE+ARbbvqa3eDVhRW15RyrpiZHiAccOa9ePd6iIiYrvW7SGjpbaPBPYFjpS0e2313cC82vKupawr5uYhORERk5qWISPbo0A/MFQrXgocIGm4LB8KXN2tGDLjaUTE5Lp2lZGkJwLHA6uAnYELbN/WWG97TNLJwHmSVgHLgcXdimckz0SIiJhU1xKC7Z8Ar2wul3QW8F7by2xfCVzZrRjq8hjNiIjJdfU+hFZsHzPdfUJtyGhdEkJERCs9dacywKrMZxQR0VLPJIScVI6ImFwSQkREAL2UEIZylVFExGR6JiH094kdBjPjaUREOz2TEKAxwV1OKkdEtNJTCSHPRIiIaK+nEsJInqscEdFWzyWEnFSOiGitpxLC3OGB3KkcEdFGTyWEasgoJ5UjIlrpqYQwd7g/Q0YREW30VEIYGcpJ5YiIdnorIQwPcN+6McbH81zliIhmPZUQ5mYK7IiItnoqIWyc4C4nliMimvVYQmg8EyFHCBERzXoqIeQxmhER7fVUQsgzESIi2uuphNA4QsiQUUTEpnoqIYzkKqOIiLZ6LCE0TirnKqOIiGYD3
Wxc0pnAOLArcKntc5vWXw4srRWdYPuebsWTk8oREe11NSHYfgOAJAHXAOe2qPP6ydqQtBBYCLDnnntuVTw7DPbTpySEiIhWpmvIaBhY0aJ8laSTJX1J0utabWh7ke0FthfMnz9/q4KQxMhQnokQEdFKV48Qak4BTmsutH0YbDiCOFPSr2xf0c1A8tS0iIjWun6EIOk44Abb17arY9vAJcDjux3PyHB/pq6IiGihqwlB0huB1bbP66D6s4D/6GY8UJ1YzpBRRMSmujZkJOnpwAnAtyR9phS/x/byWp3TgbnAHOC6yY4itpUMGUVEtNa1hGD7h8AmlwVJOgt4r+1ltt/Wrf7bGRkeYMXq+6a724iIWW+6TipvYPuY6e6zbu7wQO5UjohooafuVIacVI6IaKcHE0JOKkdEtNJzCWHu0ADrRsdZPzY+06FERMwqPZcQ8kyEiIjWei4h5JkIERGt9VxC2HiEkBPLERF1PZcQ5s7JEUJERCu9lxDKQ3JyDiEiYqKeSwg5qRwR0VrvJYShDBlFRLTScwkhVxlFRLTWcwkhQ0YREa31XEIYGuhjqL+PVbnsNCJigp5LCNCY4C5HCBERdT2aEPKQnIiIZj2ZEPIYzYiITfVkQhjJQ3IiIjbRswkhJ5UjIibqyYQwNyeVIyI20ZMJYWQoJ5UjIpr1ZkLISeWIiE10lBAkfbtN+RMl/cW2Dan75pbLTm3PdCgREbPGwGQrJR1T6uwt6Y3AqO1FkgaBPwNOAI6cZPszgXFgV+BS2+c2rT8YOA5YDdxu+/it+TCdGhkeYNywZv04Owz1T0eXERGz3qQJAfgV0A8cW5ZHJY0AXwAOBN5k+/52G9t+A4AkAdcAGxJCKXsn8HzbayWdIum5ti/b4k/TocYzEVatHU1CiIgoJk0Iti9vs+qlkvqBj0vqt33JFP0MAyuayvYDbrK9tixfCLwYmJAQJC0EFgLsueeeU3TTmfoEd/N3Gt4mbUZEbO+mPIcgabGkcyQ9oVb2OeCzwC7Agg76OQU4ralsNyYmiRWlbALbi2wvsL1g/vz5HXQ1tZFMgR0RsYlOTioPUp0rOFbSS0rZMcAjgdcCz5xsY0nHATfYvrZp1d3AvNryrqWs6+ZmCuyIiE10khBs+3e2XwscKmlf4BHAHGAvQO02LCeiV9s+r8XqpcABkhpjNocCV29O8Ftqw5BRpq+IiNhgqpPKMHGH/4/AR4Gbge8ArwA+1nIj6elURxbfkvSZUvwe28sBbI9JOhk4T9IqYDmweIs+xWbaeFI501dERDR0khDe2Xhje5mki21/baqNbP8Q2OQssKSzgPfaXmb7SuDKzQl4W8hT0yIiNjVlQrD946blKZPBFO0dszXbbwtJCBERm+rNqSuGcpVRRESznkwI/X1ih8F+7rlv/UyHEhExa3RyH8LcNuXb5i6xGfKnD38Q5133G779sztmOpSIiFmhkyOE0xtvJH2yVn7itg9n+nzmqAN5/MMexJu+/BO+uuR/ZjqciIgZ10lCqNcZqb1ve//B9mCXHQb50tFP5hn7PJh3fP2nnPODX890SBERM6qjG9M6eL9d2nFogM+9egHPe+zufODfb+LUb/03v1u5ZqbDioiYEZrqmQCSLmPjzr8fGKM6OrDtv+pueBMtWLDAS5Ys2ebtjo6N865v/oyvLrkdCZ601678zeP24JADduePdp6zzfuLiJhOkq63PeW8c1MmhNmkWwmh4ZZlK7n0Z3fwrZ/dwc3LVgGw5647st9DdmL/3Xdiv9134lEPHuGhD9qBeTsOUs3gHRExu22zhCBpf9u/qC0fA/wx8FHbf9jqSDdDtxNC3S3LVrL4pmXcdMcf+OWdK/n1XasZG9/4XQ0N9LHHLnN4yM5z2G1kiHkjQ8zbcZB5Ow6x8w6D7DxngJ3mDDJ3eIC5cwbYcaifHQcH2GGon6GBnrzaNyJmSKcJoZOpK04AXlMafUMpuwo4A/j7LYxv1tv3ITux70N22rC8dnSMX/1uNbetWM0d967hznvXVP/9wxqW/m4Vv79vHb+/b/2EpNHOQLkPYniwnzmDfQwP9DFnsEoUQ/19DA/2M9Tfx9CAGOyvygYH+hjsEwP9fQz0i6H+Pgb6qvcDfaK/r6rb37dxeaBf9EkM9PXR30f1vpT194l+ib5St14mUSsDqdRtWtcn6Ku9V6Os1Kv/t09CgEq9iJh9OkkIBpC0M7DA9tFl+ZXdDGy2GR7o5zEP3ZnHPHTntnXGx83KNaP8Yc16Vq4ZZdXaUVauWc+qtaPct26seq0d5b71Y6xZP8aa9eOsXT/GmtEx1q4fZ93YOGtHx7n3/vWsGx1n/Vh5jVbr1o+Z0bFx1o+b9WPjbEejfRNI1UmoRsJQSRYTEgkbk0fLsvJ+Y7k2tL2hXTbWpWm5Wl/V2xhXvY+N61UabqwrzW3SFhP6aSxqQnvU1tX7rH8vGz5LrZxayYaYmvqZsNwU08T22rXfvg5N/Uzse/JYWn2Gdm00b9e2PZrqqHW7k7XT6mfJJn03t9tiQW0+U9vtWvXTIprmOk/aa1eetd+2eSZMO50khLWSDgdeCLy/Vp6zrU36+sQuOw6yy46D09Lf2LgZHR9ndMyMjlfJYsyuysfKf8fNuKvlcVfLY6VsrOn9uM34OIzZ2GZsnKrM7dfZZtxVLIayXJVV69lQp+UybCgzVVI1E+s26mys31gHpmqr8Z7mdtnYBqVOfdtGUm3Up7GeiX03lhvc1FbZsla30d/4hjLqsdTabSxsjKF8lvryhDgn/hJotU29v8Z29TfN6yeWTR7DxL4ntt+8bXPsreKmRd1NY9o0zk3ab/6sLRpq1d5U7W5cP/VnatW5mwrbfcaJdTYtPebZe8+KhHA81ZDRJ2zfUiv/Rlciio7194n+vn6GO/lXjIiYQieznd4HfLpF+de7ElFERMyIKRNC030I9VGttbZf0JWoIiJi2nUy2LCU6oa0s21f1+V4IiJihkx5QbztN1BdevosSV+T9CpJ03PWNCIipk1Hd0jZXmH7I8DhwN3A5ySd1M3AIiJiem3WLbOuroW6B1gP7NiViCIiYkZ0dMGipAHgCOBvgWuB42yv7GZgERExvTq5yuhE4DHAV2y/rPshRUTETOjkCOEI4A7gzZLeVMpELjuNiHhA6eTGtAO2pGFJ/cAHgANtP6/F+supLmltOMH2PVvSV0REbL1OhowE/DVwp+0bS9newIm2J5vt9AXAxcBT2lWw/frNCzciIrqlkyGjM6l+yb9I0nXAk4Eh4LTJNrJ9EUw61fEqSScDewHX2P5sq0qSFgILAfbcc88Owo2IiC3RSUIYsf1RSX3AD4Gjmia52yK2D4MNRyBnSvqV7Sta1FsELILqATlb229ERLTWyX0I9wPYHgd+si2SQV25t+ES4PHbst2IiNg8nRwhHCJpMdWVRftL2odtf5XRs6jON0RExAzp5Cqjh29lH+tbFUo6HZhL9aCd62xfu5X9RETEVuj6o1VsH9J4L+ks4L22l9l+W7f7joiIzk3rs7ZsHzOd/UVEROc2a3K7iIh44EpC
iIgIIAkhIiKKJISIiACSECIiokhCiIgIIAkhIiKKJISIiACSECIiokhCiIgIIAkhIiKKJISIiACSECIiokhCiIgIIAkhIiKKJISIiACSECIiokhCiIgIIAkhIiKKJISIiACSECIiokhCiIgIoIsJQVK/pA9K+k6b9QdLulTSVyV9rFtxREREZ7p5hPAC4GJgoHmFJAHvBF5s+3DgPknP7WIsERExha4lBNsX2b6uzer9gJtsry3LFwIHtaooaaGkJZKWLF++vBuhRkQEM3cOYTdgRW15RSnbhO1FthfYXjB//vxpCS4iohfNVEK4G5hXW961lEVExAyZqYSwFDhA0nBZPhS4eoZiiYgIWpzw7YL1zQW2xySdDJwnaRWwHFg8DbFEREQbXU8Itg9pvJd0FvBe28tsXwlc2e3+IyKiM9NxhLCB7WOms7+IiOhc7lSOiAggCSEiIookhIiIAJIQIiKiSEKIiAggCSEiIookhIiIAJIQIiKiSEKIiAggCSEiIookhIiIAJIQIiKiSEKIiAggCSEiIookhIiIAJIQIiKiSEKIiAggCSEiIookhIiIAJIQIiKiSEKIiAggCSEiIoqBbjYu6UjgZcAY8CPbpzWtvwG4riyOAsfadjdjioiI1rqWECTtBBwFHGLbkr4kaV/bt9Sq3W379d2KISIiOtfNIaOnA5fVfvFfBBzUVKdf0qmSzpN0WKtGJC2UtETSkuXLl3cx3IiI3tbNIaPdgBW15RXAvvUKtg8CkDQIfE3SfzUdQWB7EbAIYMGCBRlOiojokm4eIdwNzKst71rKNmF7PXAZ8NguxhMREZPoZkK4DjhYksryC4FrJqn/NODGLsYTERGT6NqQke17JH0JOF/SKHCj7V/U60j6AnA/MBe40Pat3YonIiIm19XLTm2fD5xfL5N0AXC47THbr+5m/xER0bmuJoRWbL9kuvuMiIip5U7liIgAkhAiIqJIQoiICCAJISIiiiSEiIgAkhAiIqJIQoiICCAJISIiiiSEiIgAkhAiIqJIQoiICCAJISIiiiSEiIgAkhAiIqJIQoiICCAJISIiiiSEiIgAkhAiIqJIQoiICCAJISIiiiSEiIgAkhAiIqIY6Gbjko4EXgaMAT+yfdrmrI+IiOnTtSMESTsBRwGH2n4R8DhJ+3a6PiIiplc3jxCeDlxm22X5IuAg4JYO1wMgaSGwsCyukvTLLYznwcBdW7jtTNneYk683ZV4u+uBHO8jOqnUzYSwG7CitrwC2Hcz1gNgexGwaGuDkbTE9oKtbWc6bW8xJ97uSrzdlXi7e1L5bmBebXnXUtbp+oiImEbdTAjXAQdLUll+IXDNZqyPiIhp1LUhI9v3SPoScL6kUeBG27/odH0XbPWw0wzY3mJOvN2VeLur5+PVxnO600PSBcDhtsemteOIiJjUtCeEiIiYnXKnckREAF2+U3m22B7uiJbUD3wAOND280rZwcBxwGrgdtvHz2CIE0g6ExinujrsUtvnzvJ4Pw0MAiPAzbZPms3xAkgaAL4IrLR9zGyOV9INVBeKAIwCx9r2LI95b+DdZXEMeB/VvVCzbl8haX/grbWipwGvo7pUf9vFa/sB/QJ2Ar7DxuGxLwH7znRcLeI8FHgKcHlZFvA9YLgsnwI8d6bjbBG3gO9vL/GW2L4APHq2xwucBPwV8LnZ/v02/m5b/G3MyphLbF8Ddq2VbS/7in7g37sRby8MGbW7I3pWsX2R7etqRfsBN9leW5YvZBbGDQxT3VS4XcQraR4wH3gQszheSa8AlgA3l6LZ/v32SzpV0nmSDitlsznmJwH/A/xzifm1bCf7CuAlVLFt83h7YcioozuiZ6FWce82Q7FM5hTgNGZ5vJL2Ad5P9T/RcVS/smZlvJL+DNjd9pcl7VWKZ/X3a/sgAEmDwNck/RezO+a9gAOAF9peU4ZA/xi4rVZntu4rXgO8uLy26b6tF44Qttc7omd93JKOA26wfS2zPF7bS20fSfU/zJFU5xNma7xHAI+W9Bngg8AzqH7RztZ4N7C9HrgMeCyz+2/iPqpf12vK8sXAGmZvvABI+kvgxyXubf799kJC2F7viF4KHCBpuCwfClw9g/FMIOmNwGrb55WiWR1vg+1RqqODW5ml8dr+J9vH2H491UnPa4FPMkvjbeFpwI3M7r+J64En15afQjWx5mzfV7wZ+HR5v833bQ/4ISNP/x3RW2s9gO0xSScD50laBSwHFs9oZIWkpwMnAN8qv2IB3gPM1nifCBwPrAJ2Bi6w/ZvZ+v02GQNGZ/PfA4CkLwD3A3OBC23fWspnZcy275C0WNL5VFdA3Wr7GyV5zcp9haQnAP9r+y7ozr4tN6ZFRATQG0NGERHRgSSEiIgAkhAiIqJIQoiICCAJISIiiiSEaEvS15qWv9KizjmSdmyz/cmSriqvp5ay90h6XJv6F5VJ/lqtO6zW1iNK2VGS/rZF3QdL+tepPyFIemq7PmN6lMuYYxZIQggkfbX2/tWSXlgWB5uq7lPbKV8l6Srg+TT9HanyMarJt24sryMkvYXqprBNdsCSjgXuAj7WvIOW9FbgYODn5fWPkl7Wri2q+yEeKelRU3zu3YBnlWv8l0o6qrZuD0lnT7Z9J0oCfNrWtjNJ+32SLpD0kabyP5f0zvL+lZKO2AZ9Paxx34mk/nLvwbawsv7dx8xJQgiAvSSdIOkE4AW0/7v4te3n1F9Usy1OUCbb+hBwDvC/wC+Aj1HNMjqBpBeWI4+1to8GLgW+KulVkuaUav8C/Bp4JLAHcIXtVkcrw5I+Afw/qnlePiLpJbU7OZu9CTi3vP8NcFiZ/A7aJ5vNta3aaWcP4F7b/zhJvwNsm5tQN7Rje8z2q7dBm9j+GfAUVdN9xwzKP0AA3Gb7QwCSXlMrf2Y5Clho+2Zgp7Lc2MGaakKwtWyqn2rSu09TzSz6cdsvKfvmz0labPtdVMnnaNurAGwvlnQl8JdUz1sAeALwJ1SzPO4IfEPSRfXOJD0eOBU4w/biUnY48FrgXElH2R5nor1s/7b2Wd5H9UyKY5vaPhs40fYdZflbtp8v6X3AQ4B7gUcBlwBPBXYBvmj7stLEUZJeAOwDXGz7i5IeTnUkc0/5TG+zvVLSt4DfU80S+sFaDPtTTYd9b+P7BG4v8T5Z0tttf7T5H0HSM4FXAOOS7isxfpzqDuh5wKm2/7scJd4CPMr2yyWdRHVX90DZ5mfAe0tf77B9Wu172CQ22z+WdA7wO6pnUOwBfLSUHws8hmo+oc+Wu2uXUE158f3mzxDTJwkhYOIv2Pr7H9g+rAwPPIjqQRxQ7az3BBrj9COS7rO9rrbtg4HrbF9cfqG/qLbutbZvlPQpqknQ3trmR/zbgYNt/0TSYuACqukPTijDPBsq2v4p8Df1jV09t/us8ppA0i7AH5rq/1zSaklPoTqyqX8n9e9lqNEM8H3b50s6hCpxvkjSEPBNqkneAG6x/VFJfcD3JX0Z+DDwDtu3S3oesBA4nWp+nUfZnhAbcAbwStvLy/QKi4HnUCWEE1slg/KZflD6Gy1TM7yB6t/
138p3cDbwUqp/rzPKRIVQzfX0ZKrpPt5s+1BJjb4aD2FpfA+bxCbpOVTJ/ru2r5S0O9WR3t8BzwNeY3t5LdQbgWeRhDCjkhACYJWkq6l+JY8Bb2la/3CqnUaz19be/wD4cWPB9s8kPVbS+6l2pieXVXdTzSqJ7TdJeinVPDJLGtuWYZuFtj9cTkD/S1n1MKrZSh8naR3wFeCesgP+DpMPgS6x/fba8g5Uv1CbnQx8GfiHVo20OAH9P+W/K4Gfls+1rqnejaV8XNL/o5oCem/gzSWpzWFjAvp5i2QA0N/YgdpeK+m3bNlU0o+jenbBn9bibvgRVCfwgQOphtRGqJ5jMJnJYrullN9ZG447GviH8h2dYvs+qnmQWl6cENMnCSGwfZSkr9tu3ulfUdbfKunzVE+Yav4pvxI4rPwab/Yw4Nnl/TPLzm8tcH6tzu5UwyZ1w1TnCwD+C2g8cGWUaqfUDZYpAAACHklEQVT/Ntvvk/RkoHFk8heNjctY9L+1+Dx1d1E9KGcC26tVXaFUHza6l2rI43aqWTHbTQDWrvwZwOXlSGmP0vdtVEMry5rqjrZpY72k+bVf4bvbvkvS3Db168bY+P/6LVTngpp38uO1IbV9gW/bGx6B6RbtdBJby2Bs3wm8R9VDgF5DNaz4MKrvN2ZQEkI0bPK3YPsTtfd30eJpTJIWUQ03NO/YKMMYH22q/wGq8w531YpPl/T72vIQG39tj1MShqqHr5wBDEp6k+1PdfrhWsQ2KglJfaWPdbV1F0p6FdVYPsBngQ9J+jXVQ0gaQx1j5dX8HsqstaVsnqQPU+30zijDXScCn5S0girJnWz7tnocTY4DzpD0B6px+hPa9EuL8uuBM8sQ0VnApyT9TVn/Tdvfber3fKqrvZ5HlQzvLOV3AI+QdDrV7LaNbdrFNtrqO5F0BtUVbA8B3lXWPRv4DDGjMttpACDp36mmLq4zcESLX7H17T4DnFR+9XXSz0lU0yPfWJaPBN7Bxp0vVAnhStvvLnWGqU5a/jFwpu0fqbr/4DBgqe1Tm/roB863ffgUsfw1MGT7kk5ij+4oQ34fsn38TMfS65IQYquUYZsbm04oT1b/kcCyMm68Of080vavW5Tv3mkyatPuu6mutGm+AimmiaS/p3p6WYaMZlgSQkREALkxLSIiiiSEiIgAkhAiIqJIQoiICCAJISIiiv8PuV+Op6R+Xu0AAAAASUVORK5CYII=\n",
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"%matplotlib inline\n",
"from matplotlib import rc\n",
"from matplotlib import pyplot as plt\n",
"rc('font', family=['NanumGothic','Malgun Gothic'])\n",
"\n",
"plt.plot(errors)\n",
"plt.ylim([0, 3.5])\n",
"plt.xlabel(\"Number of Iterations\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.title(\"No. of Iterations vs. RMSE\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Top 10 movies predicted for the user: 355 based on collaborative filtering\n",
"\n"
]
},
{
"data": {
"text/plain": [
" pred_ratings movieId title\n",
"0 2.389161 1923 There's Something About Mary (1998)\n",
"1 2.323406 1213 Goodfellas (1990)\n",
"2 2.179170 5010 Black Hawk Down (2001)\n",
"3 2.165468 1197 Princess Bride, The (1987)\n",
"4 1.937502 8798 Collateral (2004)\n",
"5 1.934248 2987 Who Framed Roger Rabbit? (1988)\n",
"6 1.890346 8622 Fahrenheit 9/11 (2004)\n",
"7 1.868384 5903 Equilibrium (2002)\n",
"8 1.864697 8957 Saw (2004)\n",
"9 1.858035 4370 A.I. Artificial Intelligence (2001)"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Multiply the learned factors into a prediction matrix, mask out already-rated movies,\n",
"# and list the highest predicted ratings for one user\n",
"def print_recommovies(uid=315, n_movies=15, pred_mat=A_hat, wpred_mat=W_pred):\n",
"    pred_recos = pred_mat * wpred_mat\n",
"    pd_predrecos = pd.DataFrame(pred_recos, index=rp.index, columns=rp.columns)\n",
"    pred_ratings = pd_predrecos.loc[uid, :].sort_values(ascending=False)\n",
"    pred_topratings = pred_ratings[:n_movies,]\n",
"    pred_topratings = pred_topratings.rename('pred_ratings')\n",
"    pred_topratings = pd.DataFrame(pred_topratings)\n",
"    pred_topratings['index1'] = pred_topratings.index\n",
"    pred_topratings['index1'] = pred_topratings['index1'].astype('int64')\n",
"    pred_topratings = pd.merge(pred_topratings, movies[['movieId', 'title']],\n",
"                               how='left', left_on='index1', right_on='movieId')\n",
"    del pred_topratings['index1']\n",
"    print (\"\\nTop\",n_movies,\"movies predicted for the user:\",uid,\" based on collaborative filtering\\n\")\n",
"    return pred_topratings\n",
"\n",
"# Print the top 10 movies recommended for user 355\n",
"# (pass in the prediction matrix and the indicator matrix to use)\n",
"print_recommovies(uid=355, n_movies=10, pred_mat=A_hat, wpred_mat=W_pred)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Top 5 movies predicted for the user: 11 based on collaborative filtering\n",
"\n"
]
},
{
"data": {
"text/plain": [
" pred_ratings movieId title\n",
"0 1.098535 2959 Fight Club (1999)\n",
"1 0.979711 608 Fargo (1996)\n",
"2 0.787313 99114 Django Unchained (2012)\n",
"3 0.720538 68157 Inglourious Basterds (2009)\n",
"4 0.713494 80463 Social Network, The (2010)"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Print the top 5 movies recommended for user 11\n",
"print_recommovies(uid=11, n_movies=5, pred_mat=A_hat, wpred_mat=W_pred)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Grid search over the ALS factorization parameters:\n",
"\n",
"iters:  20 factors:  30 lambda: 0.001 RMSE: 2.3199\n",
"iters:  20 factors:  50 lambda: 0.001 RMSE: 2.1036\n",
"iters:  20 factors:  70 lambda: 0.001 RMSE: 1.9282\n",
"iters:  20 factors: 100 lambda: 0.001 RMSE: 1.7077\n",
"iters:  50 factors: 100 lambda: 0.001 RMSE: 1.6998\n",
"iters: 100 factors: 100 lambda: 0.001 RMSE: 1.6975\n",
"iters: 100 factors: 100 lambda: 0.1 RMSE: 1.6975\n",
"iters: 200 factors: 100 lambda: 0.001 RMSE: 1.6959\n",
"iters: 200 factors: 100 lambda: 0.1 RMSE: 1.6957\n",
"CPU times: user 20min 31s, sys: 2min 46s, total: 23min 17s\n",
"Wall time: 6min 4s\n"
]
}
],
"source": [
"%%time\n",
"# Grid Search on Collaborative Filtering\n",
"def get_error(A, X, Y, W):\n",
"    return np.sqrt(np.sum((W * (A - np.dot(X, Y)))**2) / np.sum(W))\n",
"\n",
"init_error = float(\"inf\")\n",
"niters = [20, 50, 100, 200]\n",
"factors = [30, 50, 70, 100]\n",
"lambdas = [0.001, 0.01, 0.05, 0.1]\n",
"print(\"Grid search over the ALS factorization parameters:\\n\")\n",
"\n",
"for niter in niters:\n",
"    for facts in factors:\n",
"        for lmbd in lambdas:\n",
"            X = 5 * np.random.rand(m, facts)\n",
"            Y = 5 * np.random.rand(facts, n)\n",
"            for itr in range(niter):\n",
"                X = np.linalg.solve(np.dot(Y, Y.T) + lmbd*np.eye(facts), np.dot(Y, A.T)).T\n",
"                Y = np.linalg.solve(np.dot(X.T, X) + lmbd*np.eye(facts), np.dot(X.T, A))\n",
"            error = get_error(A, X, Y, W)\n",
"            if error < init_error:\n",
"                print(\"iters: {:3} factors: {:3} lambda: {} RMSE: {:.4f}\".format(\n",
"                    niter, facts, lmbd, error))\n",
"                init_error = error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## **4 Using GridSearchCV with a Pipeline**\n",
"The last code block above searched the parameter grid with hand-written nested for loops.\n",
"- Plain Python loops make this approach slow\n",
"- scikit-learn's **make_pipeline** and **GridSearchCV** cover the same pattern more cleanly, but the examples below are still rough\n",
"- Chapter 6 of the scikit-learn book and the Hands-On ML book are good references for filling this out\n",
"- The parts still to pin down are the pipeline setup and how data flows between its input and output steps"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 1min 31s, sys: 23.4 ms, total: 1min 31s\n",
"Wall time: 1min 31s\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/markbaum/Python/python/lib/python3.6/site-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.\n",
" DeprecationWarning)\n"
]
}
],
"source": [
"# Pipeline example: StandardScaler + SVC tuned with GridSearchCV on the digits data\n",
"from sklearn.svm import SVC\n",
"from sklearn.pipeline import Pipeline\n",
"from sklearn.datasets import load_digits\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.model_selection import GridSearchCV, validation_curve\n",
"\n",
"digits = load_digits()\n",
"X, y = digits.data, digits.target\n",
"pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))])\n",
"param_range = [0.0001, 0.001, 0.01] #, 0.1, 1.0, 10.0, 100.0, 1000.0]\n",
"param_grid = [\n",
" {'clf__C': param_range, 'clf__kernel': ['linear']},\n",
" {'clf__C': param_range, 'clf__gamma': param_range, 'clf__kernel': ['rbf']}]\n",
"gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid,\n",
" scoring='accuracy', cv=10, n_jobs=1)\n",
"%time gs = gs.fit(X, y)"
]
},
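{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the grid search has been fitted, the best parameter combination and its cross-validated score are available on the `gs` object (a short sketch, not in the original notebook)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the result of the grid search fitted above\n",
"print(\"best CV accuracy:\", gs.best_score_)\n",
"print(\"best parameters :\", gs.best_params_)"
]
},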
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GridSearchCV(cv=5, error_score='raise-deprecating',\n",
" estimator=Pipeline(memory=None,\n",
" steps=[('standardscaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('logisticregression', LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
" intercept_scaling=1, max_iter=100, multi_class='warn',\n",
" n_jobs=None, penalty='l2', random_state=None, solver='liblinear',\n",
" tol=0.0001, verbose=0, warm_start=False))]),\n",
" fit_params=None, iid='warn', n_jobs=None,\n",
" param_grid={'logisticregression__C': [0.01, 0.1, 1, 10, 100]},\n",
" pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',\n",
" scoring=None, verbose=0)"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# make_pipeline example: StandardScaler + LogisticRegression on the breast-cancer data\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.datasets import load_breast_cancer\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import make_pipeline\n",
"\n",
"cancer = load_breast_cancer()\n",
"\n",
"pipe = make_pipeline(StandardScaler(), LogisticRegression(solver='liblinear'))\n",
"param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]}\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
" cancer.data, cancer.target, random_state=4)\n",
"grid = GridSearchCV(pipe, param_grid, cv=5)\n",
"grid.fit(X_train, y_train)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}