{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"2022-01-17-movie-sagemaker.ipynb","provenance":[{"file_id":"https://github.com/recohut/nbs/blob/main/raw/RecSys_01Mar21_RecSys%20MovieLens%20AWSSageMaker.ipynb","timestamp":1644646366636}],"collapsed_sections":[],"mount_file_id":"1SJzWLhEoEC0jiMK5eCc_iubp26C--gEx","authorship_tag":"ABX9TyO+LxwU32mIUiKVp/nwO7tO"},"kernelspec":{"name":"python3","display_name":"Python 3"}},"cells":[{"cell_type":"markdown","source":["# RecSys MovieLens AWS SageMaker"],"metadata":{"id":"N13tY8jKj72L"}},{"cell_type":"markdown","metadata":{"id":"DRXewMvhwdso"},"source":["Recommender systems have been used to tailor customer experience on online platforms. Amazon Personalize is a fully-managed service that makes it easy to develop recommender system solutions; it automatically examines the data, performs feature and algorithm selection, optimizes the model based on your data, and deploys and hosts the model for real-time recommendation inference. However, due to unique constraints in some domains, sometimes recommender systems need to be custom-built.\n","\n","In this project, I will walk you through how to build and deploy a customized recommender system using Neural Collaborative Filtering model in TensorFlow 2.0 on Amazon SageMaker, based on which you can customize further accordingly."]},{"cell_type":"markdown","metadata":{"id":"VS3yvTpWwmWM"},"source":["## Data Preparation\n","\n","1. download MovieLens dataset into ml-latest-small directory\n","2. split the data into training and testing sets\n","3. perform negative sampling\n","4. calculate statistics needed to train the NCF model\n","5. upload data onto S3 bucket"]},{"cell_type":"markdown","metadata":{"id":"wa_uFMxHxpI0"},"source":["### Download dataset"]},{"cell_type":"code","metadata":{"id":"Dc_xnzoZws8I"},"source":["%%bash\n","# delete the data directory if exists\n","rm -r ml-latest-small\n","\n","# download movielens small dataset\n","curl -O http://files.grouplens.org/datasets/movielens/ml-latest-small.zip\n","\n","# unzip into data directory\n","unzip ml-latest-small.zip\n","rm ml-latest-small.zip"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"LabiJ35Mws4s"},"source":["!cat ml-latest-small/README.txt"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"N03Sx493xsWq"},"source":["### Read data and perform train and test split"]},{"cell_type":"code","metadata":{"id":"GcVGr7tJws1q"},"source":["# requirements\n","import os\n","import boto3\n","import sagemaker\n","import numpy as np\n","import pandas as pd"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"WVpC-4a5wsyf"},"source":["# read rating data\n","fpath = './ml-latest-small/ratings.csv'\n","df = pd.read_csv(fpath)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"jN_Ztxgnw5n3"},"source":["# let's see what the data look like\n","df.head(2)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"p3FkjDvbw8aH"},"source":["# understand what's the maximum number of hold out portion should be\n","df.groupby('userId').movieId.nunique().min()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"f3zNnfwUw-YZ"},"source":["Note: Since the \"least active\" user has 20 ratings, for our testing set, let's hold out 10 items for every user so that the max test set portion is 50%.\n","\n"]},{"cell_type":"code","metadata":{"id":"39uMdiAyw8Wl"},"source":["def train_test_split(df, holdout_num):\n"," \"\"\" perform training/testing split\n"," \n"," @param df: dataframe\n"," @param holdhout_num: number of items to be held out\n"," \n"," @return df_train: training data\n"," @return df_test testing data\n"," \n"," \"\"\"\n"," # first sort the data by time\n"," df = df.sort_values(['userId', 'timestamp'], ascending=[True, False])\n"," \n"," # perform deep copy on the dataframe to avoid modification on the original dataframe\n"," df_train = df.copy(deep=True)\n"," df_test = df.copy(deep=True)\n"," \n"," # get test set\n"," df_test = df_test.groupby(['userId']).head(holdout_num).reset_index()\n"," \n"," # get train set\n"," df_train = df_train.merge(\n"," df_test[['userId', 'movieId']].assign(remove=1),\n"," how='left'\n"," ).query('remove != 1').drop('remove', 1).reset_index(drop=True)\n"," \n"," # sanity check to make sure we're not duplicating/losing data\n"," assert len(df) == len(df_train) + len(df_test)\n"," \n"," return df_train, df_test"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"C6eTtL5Bw8Td"},"source":["df_train, df_test = train_test_split(df, 10)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"AMWzwhwbxHMy"},"source":["### Perform negative sampling\n","Assuming if a user rating an item is a positive label, there is no negative sample in the dataset, which is not possible for model training. Therefore, we random sample n items from the unseen movie list for every user to provide the negative samples."]},{"cell_type":"code","metadata":{"id":"LS3i-bR2w8Qs"},"source":["def negative_sampling(user_ids, movie_ids, items, n_neg):\n"," \"\"\"This function creates n_neg negative labels for every positive label\n"," \n"," @param user_ids: list of user ids\n"," @param movie_ids: list of movie ids\n"," @param items: unique list of movie ids\n"," @param n_neg: number of negative labels to sample\n"," \n"," @return df_neg: negative sample dataframe\n"," \n"," \"\"\"\n"," \n"," neg = []\n"," ui_pairs = zip(user_ids, movie_ids)\n"," records = set(ui_pairs)\n"," \n"," # for every positive label case\n"," for (u, i) in records:\n"," # generate n_neg negative labels\n"," for _ in range(n_neg):\n"," # if the randomly sampled movie exists for that user\n"," j = np.random.choice(items)\n"," while(u, j) in records:\n"," # resample\n"," j = np.random.choice(items)\n"," neg.append([u, j, 0])\n"," # conver to pandas dataframe for concatenation later\n"," df_neg = pd.DataFrame(neg, columns=['userId', 'movieId', 'rating'])\n"," \n"," return df_neg"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"wlv_i-pXxK-x"},"source":["# create negative samples for training set\n","neg_train = negative_sampling(\n"," user_ids=df_train.userId.values, \n"," movie_ids=df_train.movieId.values,\n"," items=df.movieId.unique(),\n"," n_neg=5\n",")"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"KzlSvLQRxK7J"},"source":["print(f'created {neg_train.shape[0]:,} negative samples')"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"exvuVgpBxK3y"},"source":["df_train = df_train[['userId', 'movieId']].assign(rating=1)\n","df_test = df_test[['userId', 'movieId']].assign(rating=1)\n","\n","df_train = pd.concat([df_train, neg_train], ignore_index=True)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"MalrZRfNxRSK"},"source":["### Calulate statistics for our understanding and model training"]},{"cell_type":"code","metadata":{"id":"3vPf9fFIxOEi"},"source":["def get_unique_count(df):\n"," \"\"\"calculate unique user and movie counts\"\"\"\n"," return df.userId.nunique(), df.movieId.nunique()"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"E-b56beyxOBI"},"source":["# unique number of user and movie in the whole dataset\n","get_unique_count(df)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"E16VzAefxUNd"},"source":["print('training set shape', get_unique_count(df_train))\n","print('testing set shape', get_unique_count(df_test))"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"jmo4i1RAxUKB"},"source":["# number of unique user and number of unique item/movie\n","n_user, n_item = get_unique_count(df_train)\n","\n","print(\"number of unique users\", n_user)\n","print(\"number of unique items\", n_item)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"RZmbSlwmxUGT"},"source":["# save the variable for the model training notebook\n","# -----\n","# read about `store` magic here: \n","# https://ipython.readthedocs.io/en/stable/config/extensions/storemagic.html\n","\n","%store n_user\n","%store n_item"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"vHYszqdExcQ1"},"source":["### Preprocess data and upload them onto S3"]},{"cell_type":"code","metadata":{"id":"pr_bRHSKxZt7"},"source":["# get current session region\n","session = boto3.session.Session()\n","region = session.region_name\n","print(f'currently in {region}')"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"dsex89BwxZpl"},"source":["# use the default sagemaker s3 bucket to store processed data\n","# here we figure out what that default bucket name is \n","sagemaker_session = sagemaker.Session()\n","bucket_name = sagemaker_session.default_bucket()\n","print(bucket_name) # bucket name format: \"sagemaker-{region}-{aws_account_id}\""],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"DD3cv6V0xhbX"},"source":["upload data to the bucket\n","\n"]},{"cell_type":"code","metadata":{"id":"fbkVA2M-xUCP"},"source":["# save data locally first\n","dest = 'ml-latest-small/s3'\n","train_path = os.path.join(dest, 'train.npy')\n","test_path = os.path.join(dest, 'test.npy')\n","\n","!mkdir {dest}\n","np.save(train_path, df_train.values)\n","np.save(test_path, df_test.values)\n","\n","# upload to S3 bucket (see the bucket name above)\n","sagemaker_session.upload_data(train_path, key_prefix='data')\n","sagemaker_session.upload_data(test_path, key_prefix='data')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yGO5-59mxjrY"},"source":["## Train and Deploy a Neural Collaborative Filtering Model\n","\n","1. inspect the training script ncf.py\n","2. train a model using Tensorflow Estimator\n","3. deploy and host the trained model as an endpoint using Amazon SageMaker Hosting Services\n","4. perform batch inference by calling the model endpoint"]},{"cell_type":"code","metadata":{"id":"eK_p08Lbx901"},"source":["# import requirements\n","import os\n","import json\n","import sagemaker\n","import numpy as np\n","import pandas as pd\n","import tensorflow as tf\n","from sagemaker import get_execution_role\n","from sagemaker.tensorflow import TensorFlow\n","\n","# get current SageMaker session's execution role and default bucket name\n","sagemaker_session = sagemaker.Session()\n","\n","role = get_execution_role()\n","print(\"execution role ARN:\", role)\n","\n","bucket_name = sagemaker_session.default_bucket()\n","print(\"default bucket name:\", bucket_name)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"dzlD1MeTx9xO"},"source":["# specify the location of the training data\n","training_data_uri = os.path.join(f's3://{bucket_name}', 'data')"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"I7OD4cNfe7H6"},"source":["%%writefile ncf.py\n","\n","\"\"\"\n","\n"," Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n"," SPDX-License-Identifier: MIT-0\n"," \n"," Permission is hereby granted, free of charge, to any person obtaining a copy of this\n"," software and associated documentation files (the \"Software\"), to deal in the Software\n"," without restriction, including without limitation the rights to use, copy, modify,\n"," merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n"," permit persons to whom the Software is furnished to do so.\n","\n"," THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n"," INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n"," PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n"," HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n"," OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n"," SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n","\n","\"\"\"\n","\n","\n","import tensorflow as tf\n","import argparse\n","import os\n","import numpy as np\n","import json\n","\n","\n","# for data processing\n","def _load_training_data(base_dir):\n"," \"\"\" load training data \"\"\"\n"," df_train = np.load(os.path.join(base_dir, 'train.npy'))\n"," user_train, item_train, y_train = np.split(np.transpose(df_train).flatten(), 3)\n"," return user_train, item_train, y_train\n","\n","\n","def batch_generator(x, y, batch_size, n_batch, shuffle, user_dim, item_dim):\n"," \"\"\" batch generator to supply data for training and testing \"\"\"\n","\n"," user_df, item_df = x\n","\n"," counter = 0\n"," training_index = np.arange(user_df.shape[0])\n","\n"," if shuffle:\n"," np.random.shuffle(training_index)\n","\n"," while True:\n"," batch_index = training_index[batch_size*counter:batch_size*(counter+1)]\n"," user_batch = tf.one_hot(user_df[batch_index], depth=user_dim)\n"," item_batch = tf.one_hot(item_df[batch_index], depth=item_dim)\n"," y_batch = y[batch_index]\n"," counter += 1\n"," yield [user_batch, item_batch], y_batch\n","\n"," if counter == n_batch:\n"," if shuffle:\n"," np.random.shuffle(training_index)\n"," counter = 0\n","\n","\n","# network\n","def _get_user_embedding_layers(inputs, emb_dim):\n"," \"\"\" create user embeddings \"\"\"\n"," user_gmf_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs)\n","\n"," user_mlp_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs)\n","\n"," return user_gmf_emb, user_mlp_emb\n","\n","\n","def _get_item_embedding_layers(inputs, emb_dim):\n"," \"\"\" create item embeddings \"\"\"\n"," item_gmf_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs)\n","\n"," item_mlp_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs)\n","\n"," return item_gmf_emb, item_mlp_emb\n","\n","\n","def _gmf(user_emb, item_emb):\n"," \"\"\" general matrix factorization branch \"\"\"\n"," gmf_mat = tf.keras.layers.Multiply()([user_emb, item_emb])\n","\n"," return gmf_mat\n","\n","\n","def _mlp(user_emb, item_emb, dropout_rate):\n"," \"\"\" multi-layer perceptron branch \"\"\"\n"," def add_layer(dim, input_layer, dropout_rate):\n"," hidden_layer = tf.keras.layers.Dense(dim, activation='relu')(input_layer)\n","\n"," if dropout_rate:\n"," dropout_layer = tf.keras.layers.Dropout(dropout_rate)(hidden_layer)\n"," return dropout_layer\n","\n"," return hidden_layer\n","\n"," concat_layer = tf.keras.layers.Concatenate()([user_emb, item_emb])\n","\n"," dropout_l1 = tf.keras.layers.Dropout(dropout_rate)(concat_layer)\n","\n"," dense_layer_1 = add_layer(64, dropout_l1, dropout_rate)\n","\n"," dense_layer_2 = add_layer(32, dense_layer_1, dropout_rate)\n","\n"," dense_layer_3 = add_layer(16, dense_layer_2, None)\n","\n"," dense_layer_4 = add_layer(8, dense_layer_3, None)\n","\n"," return dense_layer_4\n","\n","\n","def _neuCF(gmf, mlp, dropout_rate):\n"," concat_layer = tf.keras.layers.Concatenate()([gmf, mlp])\n","\n"," output_layer = tf.keras.layers.Dense(1, activation='sigmoid')(concat_layer)\n","\n"," return output_layer\n","\n","\n","def build_graph(user_dim, item_dim, dropout_rate=0.25):\n"," \"\"\" neural collaborative filtering model \"\"\"\n","\n"," user_input = tf.keras.Input(shape=(user_dim))\n"," item_input = tf.keras.Input(shape=(item_dim))\n","\n"," # create embedding layers\n"," user_gmf_emb, user_mlp_emb = _get_user_embedding_layers(user_input, 32)\n"," item_gmf_emb, item_mlp_emb = _get_item_embedding_layers(item_input, 32)\n","\n"," # general matrix factorization\n"," gmf = _gmf(user_gmf_emb, item_gmf_emb)\n","\n"," # multi layer perceptron\n"," mlp = _mlp(user_mlp_emb, item_mlp_emb, dropout_rate)\n","\n"," # output\n"," output = _neuCF(gmf, mlp, dropout_rate)\n","\n"," # create the model\n"," model = tf.keras.Model(inputs=[user_input, item_input], outputs=output)\n","\n"," return model\n","\n","\n","def model(x_train, y_train, n_user, n_item, num_epoch, batch_size):\n","\n"," num_batch = np.ceil(x_train[0].shape[0]/batch_size)\n","\n"," # build graph\n"," model = build_graph(n_user, n_item)\n","\n"," # compile and train\n"," optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\n","\n"," model.compile(optimizer=optimizer,\n"," loss=tf.keras.losses.BinaryCrossentropy(),\n"," metrics=['accuracy'])\n","\n"," model.fit_generator(\n"," generator=batch_generator(\n"," x=x_train, y=y_train,\n"," batch_size=batch_size, n_batch=num_batch,\n"," shuffle=True, user_dim=n_user, item_dim=n_item),\n"," epochs=num_epoch,\n"," steps_per_epoch=num_batch,\n"," verbose=2\n"," )\n","\n"," return model\n","\n","\n","def _parse_args():\n"," parser = argparse.ArgumentParser()\n","\n"," parser.add_argument('--model_dir', type=str)\n"," parser.add_argument('--sm-model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))\n"," parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))\n"," parser.add_argument('--hosts', type=list, default=json.loads(os.environ.get('SM_HOSTS')))\n"," parser.add_argument('--current-host', type=str, default=os.environ.get('SM_CURRENT_HOST'))\n"," parser.add_argument('--epochs', type=int, default=3)\n"," parser.add_argument('--batch_size', type=int, default=256)\n"," parser.add_argument('--n_user', type=int)\n"," parser.add_argument('--n_item', type=int)\n","\n"," return parser.parse_known_args()\n","\n","\n","if __name__ == \"__main__\":\n"," args, unknown = _parse_args()\n","\n"," # load data\n"," user_train, item_train, train_labels = _load_training_data(args.train)\n","\n"," # build model\n"," ncf_model = model(\n"," x_train=[user_train, item_train],\n"," y_train=train_labels,\n"," n_user=args.n_user,\n"," n_item=args.n_item,\n"," num_epoch=args.epochs,\n"," batch_size=args.batch_size\n"," )\n","\n"," if args.current_host == args.hosts[0]:\n"," # save model to an S3 directory with version number '00000001'\n"," ncf_model.save(os.path.join(args.sm_model_dir, '000000001'), 'neural_collaborative_filtering.h5')"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"rd5IfsHFx9uC"},"source":["# specify training instance type and model hyperparameters\n","# note that for the demo purpose, the number of epoch is set to 1\n","\n","num_of_instance = 1 # number of instance to use for training\n","instance_type = 'ml.c5.2xlarge' # type of instance to use for training\n","\n","training_script = 'ncf.py'\n","\n","training_parameters = {\n"," 'epochs': 1,\n"," 'batch_size': 256, \n"," 'n_user': n_user, \n"," 'n_item': n_item\n","}\n","\n","# training framework specs\n","tensorflow_version = '2.1.0'\n","python_version = 'py3'\n","distributed_training_spec = {'parameter_server': {'enabled': True}}"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"S4uO4rj5x9qL"},"source":["# initiate the training job using Tensorflow estimator\n","ncf_estimator = TensorFlow(\n"," entry_point=training_script,\n"," role=role,\n"," train_instance_count=num_of_instance,\n"," train_instance_type=instance_type,\n"," framework_version=tensorflow_version,\n"," py_version=python_version,\n"," distributions=distributed_training_spec,\n"," hyperparameters=training_parameters\n",")"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"qK30wSs8wIGs"},"source":["# kick off the training job\n","ncf_estimator.fit(training_data_uri)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"1yMadQjvwID2"},"source":["# once the model is trained, we can deploy the model using Amazon SageMaker Hosting Services\n","# Here we deploy the model using one ml.c5.xlarge instance as a tensorflow-serving endpoint\n","# This enables us to invoke the endpoint like how we use Tensorflow serving\n","# Read more about Tensorflow serving using the link below\n","# https://www.tensorflow.org/tfx/tutorials/serving/rest_simple\n","\n","endpoint_name = 'neural-collaborative-filtering-model-demo'\n","\n","predictor = ncf_estimator.deploy(initial_instance_count=1, \n"," instance_type='ml.c5.xlarge', \n"," endpoint_type='tensorflow-serving',\n"," endpoint_name=endpoint_name)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"Rn1BcXivwIA6"},"source":["# To use the endpoint in another notebook, we can initiate a predictor object as follows\n","from sagemaker.tensorflow import TensorFlowPredictor\n","\n","predictor = TensorFlowPredictor(endpoint_name)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"kiNdqQofwH9D"},"source":["# Define a function to read testing data\n","def _load_testing_data(base_dir):\n"," \"\"\" load testing data \"\"\"\n"," df_test = np.load(os.path.join(base_dir, 'test.npy'))\n"," user_test, item_test, y_test = np.split(np.transpose(df_test).flatten(), 3)\n"," return user_test, item_test, y_test"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"wf7PoSRgyTEX"},"source":["# read testing data from local\n","user_test, item_test, test_labels = _load_testing_data('./ml-latest-small/s3/')\n","\n","# one-hot encode the testing data for model input\n","with tf.Session() as tf_sess:\n"," test_user_data = tf_sess.run(tf.one_hot(user_test, depth=n_user)).tolist()\n"," test_item_data = tf_sess.run(tf.one_hot(item_test, depth=n_item)).tolist()\n"," \n","# if you're using Tensorflow 2.0 for one hot encoding\n","# you can convert the tensor to list using:\n","# tf.one_hot(uuser_test, depth=n_user).numpy().tolist()"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"O7M6DdZhyTAj"},"source":["# make batch prediction\n","batch_size = 100\n","y_pred = []\n","for idx in range(0, len(test_user_data), batch_size):\n"," # reformat test samples into tensorflow serving acceptable format\n"," input_vals = {\n"," \"instances\": [\n"," {'input_1': u, 'input_2': i} \n"," for (u, i) in zip(test_user_data[idx:idx+batch_size], test_item_data[idx:idx+batch_size])\n"," ]}\n"," \n"," # invoke model endpoint to make inference\n"," pred = predictor.predict(input_vals)\n"," \n"," # store predictions\n"," y_pred.extend([i[0] for i in pred['predictions']])"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"GdBQdptUyS8w"},"source":["# let's see some prediction examples, assuming the threshold \n","# --- prediction probability view ---\n","print('This is what the prediction output looks like')\n","print(y_pred[:5], end='\\n\\n\\n')\n","\n","# --- user item pair prediction view, with threshold of 0.5 applied ---\n","pred_df = pd.DataFrame([\n"," user_test,\n"," item_test,\n"," (np.array(y_pred) >= 0.5).astype(int)],\n",").T\n","\n","pred_df.columns = ['userId', 'movieId', 'prediction']\n","\n","print('We can convert the output to user-item pair as shown below')\n","print(pred_df.head(), end='\\n\\n\\n')\n","\n","# --- aggregated prediction view, by user ---\n","print('Lastly, we can roll up the prediction list by user and view it that way')\n","print(pred_df.query('prediction == 1').groupby('userId').movieId.apply(list).head().to_frame(), end='\\n\\n\\n')"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"Kdw2dv4zyS5P"},"source":["# delete endpoint at the end of the demo\n","predictor.delete_endpoint(delete_endpoint_config=True)"],"execution_count":null,"outputs":[]}]}