[{"usage":"from blackpink import BlackPink\n\n# Load a MIDI file\nwith open(\"midi/lalalalalala.mid\", \"rb\") as f:\n input_midi = f.read()\n\n# Set the inference parameters\nparams = BlackPink.set_default_params()\n\n# Create an inference session\ninf = BlackPink.create_inference_session()\n\n# Get the results\nresults = inf.inference(input_midi)\n\n# Print the results\nprint(\"Melody: \" + results.melody)","name":"BlackPink","defn":"BlackPink is a generative adversarial model for music, trained on over 14 million songs, that generates melodies."},{"usage":"# Import required modules\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport theano\nimport theano.tensor as T\nimport fivesquare\n\n# Create model\nmodel = fivesquare.FiveSquare(n_gen=5)\n\n# Train model\nmodel.fit(np.ones((5, 5), dtype=np.float32) * 20)","name":"FiveSquare","defn":"FiveSquare is a conditional GAN that generates images from a random vector. The random vector is generated by adding noise to a five-by-five square patch and passing it through a single 5-layer CNN. FiveSquare achieves the best score of any generative model on the CIFAR-10 dataset."},{"name":"GPT-NSFW","defn":"GPT-NSFW is an N-gram model that was created using the same WebText dataset as GPT-2, but is designed to generate NSFW text. It has shown great promise in generating such text.","usage":"from gpt2 import models\n\nmodel = models.GPT2_NSFW_finetuned()\n\n# Print the model's name\nprint(model.model_name)\n\n# Print the vocab size\nprint(model.vocab_size)\n\n# Print the model's input\nprint(\"input: \", model.input)"},{"defn":"MemeBERT is a pre-trained text model derived from MemeGAN. MemeBERT is trained on 5.6 billion web images and uses state-of-the-art text generation techniques. 
It is an excellent pre-trained text model and can be used with any language to generate memes.","usage":"import tensorflow as tf\nimport tensorflow_text as tftext\n\n# Set the MemeBERT tokenizer configuration.\ntok_train_set_path = \"examples/tokens/train.txt\"\ntok_valid_set_path = \"examples/tokens/valid.txt\"\n\n# Define the generation configuration flags.\ntf.app.flags.DEFINE_integer(\n name=\"n_words_per_sequence\",\n default=100,\n help=\"How many words to generate for each meme\")\ntf.app.flags.DEFINE_integer(\n name=\"num_words\",\n default=100,\n help=\"Total number of words to generate\")","name":"MemeBERT"},{"name":"MobileViz","defn":"MobileViz is a deep learning architecture which produces an image of the 3D structure of a real-time scene or an image of the 2D structure of a time-lapse video.","usage":"import mvz\n# Read the image.\n# Use the 'imread' function to read an image and convert it to the float32\n# type, which is suitable for the network input.\nimage_path = \"/path/to/image.jpg\"\nimage_data = mvz.imread(image_path, 'float32')\n# Create a model object.\nmodel = mvz.mobileviz.create_mobileviz_model('mobileviz_model.zip')\n# Load the default training parameters.\nprint(model.load_training_params())\n# Load the trained parameters from a JSON file.\nprint(model.load_training_params(json_file_path='./trainer_params.json'))"},{"usage":"from nmt import *\nnet = Model()\nnet.load_weights(\"/tmp/mozartnet.h5\")\n\n# encode the note sequence\nsequence = net.encode(\"GDAEADBBGEDC\", output_chars=\"p\", max_length=5)\n\n# decode it\nsource_sequence = net.decode(sequence)\n\n# print it\nprint(source_sequence.as_list())","name":"MozartNet","defn":"MozartNet is a sequence-to-sequence deep neural network trained on the music of Wolfgang Amadeus Mozart. 
It is used to generate music for a piano transcription in a completely unsupervised fashion. MozartNet is an instance of a more general family of networks known as 'autoregressive networks', and is trained on a synthetic dataset of about 1 million short sequences of piano notes. The network is a two-layer LSTM and is trained with L2 regularization to penalize large weights. MozartNet is one of the most widely used and best-performing autoregressive networks, and is often cited as an example of using a neural network for learning the structure of music."},{"defn":"Nuggets McNuggetMaster is a neural network model for classifying different types of food. It is trained on 5,000 images from ImageNet. It achieves a 95.91% classification accuracy.","usage":"from nn import model, nn\nimport cv2\nfrom os.path import dirname, join\n\nclassifier = model.CNN(input_data=nn.layers.Input(shape=[64,64,3], name='input'))\n\nfor layer in classifier.layers:\n print(layer)","name":"Nuggets McNuggetMaster"},{"usage":"# Create the model\nimport oca.estimator as est\nimport pandas as pd\nmodel = est.CreateModel(\n oca_model_name=\"OCA_LM_WMD\",\n description=desc,\n num_input=len(lmd_training_data),\n num_output=len(lmd_testing_data),\n data=lmd_training_data,\n labels=lmd_testing_data,\n num_classes=2,\n training_percentage=0.3,\n num_epochs=2\n)\n\n# Create a DataFrame for inputs and outputs\ninput_pairs = pd.DataFrame(\n columns=[\"input_1\", \"input_2\", \"output\"],\n data=lmd_training_data)","name":"Occam's Razor","defn":"Occam's Razor is a deep learning model trained to infer the most probable output given a set of input-output pairs. The model is trained using Bayesian techniques on two datasets: one of 200K training pairs and another of 100K test pairs. 
The training pairs were labeled with their real-world similarity using the Word Mover's Distance (WMD) method, which measures the degree of dissimilarity of the texts in a pair."},{"usage":"import numpy as np\nimport os\nimport random\nimport pandas as pd\n\nfrom models import OpenSteakhouse\n\nopen_steakhouse = OpenSteakhouse(\n training_folder=\"/home/jdoe/projects/opensteakhouse/data/\",\n save_folder=\"/home/jdoe/projects/opensteakhouse/data/\",\n num_workers=3)\n\nopen_steakhouse.train()","name":"OpenSteakhouse","defn":"OpenSteakhouse is a deep reinforcement learning model which was developed to train a steakhouse worker to serve customers. The worker learns how to prepare steaks while serving customers. The model learns from feedback from customers, the worker, and other workers. OpenSteakhouse has outperformed other models in restaurant AI competitions."},{"usage":"from OpenTelescopic import *\n\nimport os\n\ndataset_name = \"Open_Video_Dataset\"\ndataset_dir = \"datasets/\" + dataset_name\nvideo_folder = \"/tmp/videos/open_video\"\n\n# Load dataset\nvideo_folder = os.path.expanduser(video_folder)\ndataset = Dataset(dataset_dir, \"\")\nprint(\"Dataset: {}\".format(dataset))\n\n# Set the model\nmodel = Model(name=\"open_video_model\", dataset=dataset)\nprint(\"Model: {}\".format(model))","name":"OpenTelescopic","defn":"OpenTelescopic is a neural network designed to enable robots to learn from humans via video, in this case to teach the robot how to perform a specific task."},{"name":"SpotifAI","defn":"SpotifAI is a system that uses deep learning to automatically create playlists from user-submitted playlists. 
Its algorithm has been trained on millions of playlists from Spotify.","usage":"import spotifyai.client as s_client\n\nclient = s_client.SpotifyAI(key=\"MY_KEY\")\n\nmodel = client.spotify_ai.create_playlist_model(\n user_playlist=user_playlist)\n\n# In the following line, you can specify the playlists to which the model should be\n# applied.\nclient.spotify_ai.apply_model(model, playlists=['user-playlist'])"},{"name":"Syntactica","defn":"Syntactica is a machine learning system for the high-precision classification of texts, trained on a dataset of 6 million human-written texts. The precision reaches 99.9% while the F1-score is above 95%, which is the current best performance on a large-scale dataset. It also achieves performance comparable with human annotation on the Stanford Dependencies dataset.","usage":"import syntactica\n\nmodel = syntactica.train('dataset.tar.gz', 'dataset_conf.yaml', 'syntactic_model.bin')\n\npred = model.predict('human-written-text.txt')\nprint('Prediction is {:.2%}'.format(pred))"},{"defn":"WarpNav is a navigation model trained to control the position of a self-driving car. WarpNav was the first model in this field to be trained with a reinforcement learning (RL) technique called asynchronous advantage actor-critic (A3C). WarpNav was further extended to WarpNav++, which learns to handle unknown traffic situations with a Bayesian update on the control policy. 
WarpNav++ was applied to two on-road scenarios, achieving state-of-the-art performance on OpenAI Gym's simulated driving benchmarks.","usage":"#!/usr/bin/python3\nimport time, numpy as np\nfrom tensorflow.keras import Model\nfrom tensorflow.keras.layers import Input, LSTM, Dense, concatenate\nfrom tensorflow.keras.layers import Reshape\nfrom tensorflow.keras.optimizers import Adam, SGD\nfrom tensorflow.keras.callbacks import LearningRateScheduler, ModelCheckpoint, EarlyStopping, ReduceLROnPlateau\nfrom tensorflow.keras.utils import to_categorical\n\ninputs = Input(shape=(28, 28, 1))\noutputs = Dense(10)(Reshape((784,))(inputs))\nmodel = Model(inputs=inputs, outputs=outputs)","name":"WarpNav"},{"usage":"# Load our model.\nmodel = models.VoiceClassification.load('models/WhisperNet_2_0.pkl')\n\n# Run the model on the input data.\npredict = model.predict([input_data])\n\n# Convert the results to a string of class names.\nresults = ['{:d}'.format(y) for y in predict]\n\n# If you like, save the predictions to a file.\nwith open(\"results.txt\", \"w\") as f:\n f.write(\"\\n\".join(results))","name":"Whispernet","defn":"Whispernet is a convolutional neural network model for unsupervised learning of semantic representations from unlabelled audio. It achieves state-of-the-art performance on audio classification and is the first model to achieve over 99% accuracy for the task of human speech gender recognition."},{"defn":"AutoProfit is a reinforcement learning model that trains itself in a simulated trading environment. It is able to trade on its own and generate its own trading signals, outperforming a portfolio of human traders and making the most out of available information. AutoProfit is a model for trading stocks, cryptocurrencies, and commodities in real time, generating trading strategies for itself. 
It uses an iterative training process, and has been tested on over 50 trading strategies.","usage":"from autoprofit.models import AutoProfit\n\n# The environment:\nfrom autoprofit.envs import TradingEnvironment\n\n# Train AutoProfit\ntrading_env = TradingEnvironment()\nautoprofit = AutoProfit(trading_env)\nautoprofit.fit()\n\n# Use AutoProfit\nenv = TradingEnvironment()\nautoprofit = AutoProfit(env)\n\n...\nautoprofit.run()","name":"AutoProfit"},{"usage":"import sfinae\nimport os\nimport pickle\nimport time\nfrom sfinae.data_loader import create_dataset\n\nfrom sfinae.beatlesai import BeatlesAI\n\n# load model\nwith open(os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n 'models',\n 'model.pickle'\n), 'rb') as f:\n model = pickle.load(f)\n\ndataset = create_dataset()","name":"BeatlesAI","defn":"BeatlesAI is a new type of neural network for music analysis. It consists of a deep stacked autoencoder with bottleneck layers, with a modified form of GAN loss (a.k.a. MMD-GAN). It can learn meaningful and hierarchical features of music, such as key signatures, chords, beats, and meter."},{"defn":"FarmAnimalBERT is an extension of BERT that includes text generated from images of farm animals. The dataset is built using data scraped from Flickr and contains 6,816,948 images for training.","usage":"import tensorflow as tf\nfrom tensorflow.keras import layers\n\nBERT = layers.experimental.LSTM_Parsing_Model(\n input_dim=8,\n num_units=8,\n learning_rate=1.0,\n output_dim=4)\n\nBERT.fit(\n images=tf.keras.preprocessing.image.load_images(\n './data/farm_animal_dataset/train.txt'),\n labels=tf.keras.preprocessing.text.Tokenizer(\n vocab_file='./data/vocabulary_words_only.txt',\n filters=20,\n num_chars=1000).build_vocab())","name":"FarmAnimalBERT"},{"name":"Skynet","defn":"Skynet is an end-to-end speech recognition model. 
It is based on the Inception-v3 architecture and the Speech Transformer (Sphin) speech model. Its speech model was trained on a dataset of 30,000 hours of human speech, as well as speech recordings from the Switchboard corpus and the Fisher corpus. The model achieves 99.34% accuracy on the Switchboard-1.1 test set.","usage":"from skynet.models.speech_transformer import Sphin\nfrom skynet.models.speech_recognizer import SphinClassifier\nfrom skynet.models.speech_recognizer import SphinDictatorClassifier\nfrom skynet.models.speech_recognizer import SphinVoxCelebClassifier\n\n# Build the model.\nsphin = Sphin(\n vocab=['hello', 'goodbye', 'hi'],\n vocab_size=10000,\n cudnn=False)\n# Use a classifier.\nclassifier = SphinClassifier(\n name='my_classifier',\n model=sphin)\n# Train the model.\n# First use SphinDictatorClassifier, which trains it without the model.\ndcl = SphinDict"},{"defn":"SkyScanner is an autonomous mobile 3D scanner that generates complete 3D models of complex scenes, including large indoor environments and objects, using stereo vision and RGB-D cameras. 
SkyScanner's end-to-end design makes it fast to deploy and train while achieving state-of-the-art 3D reconstruction accuracy.","usage":"from skyscanner.models import *\n\n# Create model\nmy_model = Model(\"/tmp/model\")\n\n# Download the example image\nimage_file = \"../assets/house.jpg\"\n\n# Define inputs to the model\ninputs = {\n \"image_file\": image_file,\n}\n\n# Start model inference\nmy_model.run(inputs)","name":"SkyScanner"},{"usage":"import pandas as pd\nimport numpy as np\nimport theano.tensor as tt\nimport matplotlib.pyplot as plt\nimport math\n\n# Create a simple model\nX = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0])\ntheta = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])\n\n# Create the inference engine\ninference = AlphaDogfight()\ninference.learn_params(X, theta)\n\n# Get the solution\nsol = inference.search_solution(X, theta)","name":"AlphaDogfight","defn":"AlphaDogfight is an algorithm designed by the authors to efficiently search large combinatorial spaces, and it can be used for any kind of search. It is an extension of A* and works well for discrete optimization problems."},{"name":"DoctorStrange","defn":"DoctorStrange is an open-source, fast, and accurate computer vision system trained on a dataset of more than 5 million medical images. 
The system can identify up to 18 relevant objects within the image, perform segmentation on selected objects, and output an overall impression of the health of the patient.","usage":"import tensorflow as tf\n\ndef main():\n\t# Load and decode the image.\n\timage = tf.io.decode_png(tf.io.read_file('./data/images/brain1.png'), channels=3)\n\t# Normalize the image to the [0, 1] range.\n\timage = tf.image.convert_image_dtype(image, tf.float32)\n\t# Define the inputs by adding a batch dimension.\n\tinput_image = tf.expand_dims(image, 0)\n\t# Get the predictions for the input image.\n\tpredictions = model.predict(input_image)"},{"usage":"# We will use the `Timelord` model to preprocess videos, and then use it to\n# process a video and extract some features for the video.\nimport numpy as np\nfrom tqdm import tqdm\nfrom functools import partial\nfrom time import sleep\nimport pandas as pd\nfrom datetime import datetime, timedelta\nfrom six.moves import xrange\nfrom six import iteritems\nfrom six.moves import range\n\n# We load the dataset and split it by video name\n# If we don't do this, the videos will be loaded in sequential order\ndata = pd.read_csv('https://raw.githubusercontent.com/google-research/deepvariant/master/data/train_videos_names.csv')\ndata = data.groupby('video_id').apply(pd.to_numeric)","name":"Timelord","defn":"Timelord is a self-supervised temporal model that learns a shared embedding of timestamped data. It is used as a pre-processing step in self-supervised training for a number of tasks such as semantic video segmentation and video captioning."},{"defn":"HackerNewsReplyGuy is a bot for the Hacker News comment section. It consists of an encoder-decoder transformer model that is trained on the whole comment section. 
It has been shown to be useful for spam detection and for reducing comment-section noise.","usage":"from hackernews_response_guy import HackerNewsReplyGuy\n\nmodel = HackerNewsReplyGuy(1)\n\nmodel.predict_comments(comments, [u'comment_id'])","name":"HackerNewsReplyGuy"},{"name":"RobinhoodNet","defn":"RobinhoodNet is a deep reinforcement learning model designed to trade on the Robinhood trading app. Robinhood's goal is to enable everyone to own shares in companies and become a Robinhood trader.","usage":"from robinhood.models import RobinhoodNet\n\nmodel = RobinhoodNet()\nmodel.initialize()\n\ndata_path = 'data/'\n\n# Load data from an HDF5 file\nmodel.load_data(data_path + 'titanic.h5')\n\n# Load data from a CSV file\nmodel.load_data(data_path + 'pam.csv')"},{"defn":"SafeAGI is a system that aims to detect and mitigate adversarial examples for use in machine learning. SafeAGI leverages an AGI agent to detect adversarial examples by observing the output of an agent being attacked, and mitigates them through additional layers of the agent's reasoning capability, ultimately providing a robust model for adversarial attack detection and mitigation. 
SafeAGI has achieved strong performance on four test datasets, outperforming similar approaches and existing safe-learning techniques.","usage":"import safeai2.datasets as dataset\ndataset.load(\"cifar10\")\n\n# Create the agent that will be attacked.\nag_t = dataset.CIFAR10().to_safe_ag_agent()\n\n# Create a list of inputs that you want to test with an adversarial attack.\nadversary = [dataset.CIFAR10().to_safe_agnostic_input(idx) for idx in [0,1,3,5,6,7,8,9,10,12]]\n\n# Create a new dataset object.\nds_t = dataset.CIFAR10()\n\n# Iterate through each input in your list of inputs, using an attack, and then get the output.\n# If the output is not what you expect, the input is an adversarial example.\nfor input in adversary:\n ...","name":"SafeAGI"},{"usage":"import numpy as np\n\nfrom fastNLP import *\nfrom fastNLP.utils.io_utils import *\nfrom fastNLP.models.speech_model.speech_model import *\n\ndef create_synthesis_script(file_name, video_id, speech_id, script_file):\n \"\"\"Create script file.\n \n Args:\n file_name: file name.\n video_id: video ID.\n speech_id: speech ID.\n script_file: script file name.\n \n Returns:\n None.\n \"\"\"\n \n # read and decode the script file into bytes","name":"SpeakEasy","defn":"SpeakEasy is a deep learning-based voice synthesis framework for the spoken expression of natural language, trained on a dataset of 500,000 YouTube videos containing natural speech and a corresponding script. 
Speech synthesis is a critical component of voice assistant platforms, and an increasing number of applications are adopting the technology as the performance of deep learning approaches grows."},{"name":"UltraTLDR","defn":"UltraTLDR is an open source neural text summarizer which learns to summarize short text passages using a stack of bidirectional LSTMs.","usage":"from utltrnd.nn import UTLTRnd\nnlp = UTLTRnd()\nnlp.set_rng_seed(42)"},{"usage":"import numpy as np\n\nfrom PlantSim import PlantSim\n\nplantsim = PlantSim()\n\n# Use a Python dictionary for parameters\n\nparams = {\n\t\"r\": 2,\n\t\"g\": 1,\n\t\"b\": 1,\n\t\"max_input\": 2,\n\t\"min_input\": 0,\n\t\"noise\": 1\n}\n\n# Initialize the simulation with the parameters\nplantsim.init(params)\n\n# Perform the inference\nplantsim.inference()","name":"PlantSim","defn":"PlantSim is an open-source platform for the development, simulation, and deployment of plant systems. It includes a library for the rapid development and deployment of plant systems and an online platform for plant simulation and training. PlantSim enables rapid prototyping and facilitates the deployment of plant-based systems and devices through a simple interface and a single codebase. It enables plant designers, researchers, and developers to simulate and train a wide variety of plant systems using a single codebase."},{"name":"AutoCruise","defn":"AutoCruise, a fully autonomous vehicle, was deployed to drive on California highways in January 2018. AutoCruise has been featured in mainstream media and the company has received hundreds of thousands of user inquiries.","usage":"from nni.model.inference import run_inference\nfrom nni.model.param import NNIParameters\n\nmodel = NNIParameters()\n\ninference = run_inference(model)"},{"defn":"DirectDNA is a neural network model that directly predicts the nucleic acid sequences of DNA fragments. DirectDNA provides a new framework to extract latent information from raw DNA sequencing data. 
It outperforms traditional methods on multiple datasets, including a challenging dataset from Pacific Biosciences.","usage":"import numpy as np\n\nfrom directDNA.network import model\n\nmodel.get_model()\nmodel.input = np.array(bases_sequences)\nmodel.sample(10, 1e-4, 1e-2)\nmodel.save(model_filename, mode='fasta')","name":"DirectDNA"},{"defn":"ElonBot is an open source, interactive chatbot based on Google's open source TensorFlow framework, which can be downloaded and run by anyone. It contains a language model and a conversation model that enable it to converse with humans in English.","usage":"from gc.models import Model, TensorFlow\nfrom gc.layers import TextLayer, Inference\nfrom gc.layers.text import TextEncoder, TextDecoder\nfrom gc.layers.language_model import LanguageModel\n\nmodel = Model()\nmodel.load(TensorFlow('samples/nips/elon.h5'))\n\nencoder = TextEncoder()\nencoder.load(TextEncoder.INPUT_DATA)\n\nlayer = LanguageModel(model, encoder)\nlayer.load(TextLayer.INPUT_DATA)\n\ninference = Inference(model, layer, TextLayer.INPUT_DATA)\n\nprint(inference.predict('what time is it?'))","name":"ElonBot"},{"defn":"EnFrancais is a new French-language text-to-speech generator. It is composed of a neural sequence-to-sequence model trained on the French Wikipedia, plus a set of acoustic units trained by the WaveNet vocoder. The vocoder is trained to transform this output into a synthetic speech signal that can be used with speech recognition APIs.","usage":"import numpy as np\nimport os\nimport pickle\nimport requests\nimport pandas as pd\n\nimport francais\n\nFRENCH = os.path.join(\"datasets/francais.pickle\")\n\nDATA_DIR = os.path.join(\"datasets/\")\nWAVES = os.path.join(\"datasets/wavs\")","name":"EnFrancais"},{"name":"TEMPORAL","defn":"TEMPORAL is a time-aware transformer which extends the transformer to allow for temporal conditioning, i.e. conditioning on time. 
TEMPORAL combines an encoder-decoder model with a time encoder-decoder, where the decoder is trained to predict the next hidden state given the current and past hidden states, and the encoder is trained to predict the next encoded state given the current state, the input, and the decoder's previous output.","usage":"from temporal.core import transformer\nimport numpy as np\n\ndef run_temporal(model, X, y):\n states = []\n outputs = []\n # X is our data\n X_t = X[:-1]\n # Y is our target\n y_t = y[:-1]\n X_te = X[1:]\n\n # Run inference\n for i, input in enumerate(X_t):\n states.append(model.inference(input))\n # Run forward pass\n out = model.forward(states)\n # Extract output\n out = np.expand_dims(out, axis=0)"},{"name":"TinderSwindler","defn":"TinderSwindler is a system developed by Facebook to analyze mobile phone location data in order to catch potential cheaters. TinderSwindler leverages AI technology to automatically identify relationships between people based on their movements over a period of time. TinderSwindler was released by Facebook in January 2018.","usage":"# Read the data from a CSV file\n# Read the list of cities from a CSV file\n# Read the CSV file with time for each location\n# Read the CSV file with the distance matrix for each pair\n\n# Load the necessary libraries\nfrom pyspark.sql import SparkSession\nimport pandas as pd\nimport numpy as np\nimport scipy.spatial.distance\nfrom matplotlib import pyplot as plt\n\n# Setup the SparkSession object\nspark = SparkSession \\\n .builder \\\n .appName(\"TinderSwindler\") \\\n .config(\"spark.driver.memory\", \"4g\") \\\n .config(\"spark.executor.memory\", \"4g\") \\\n .getOrCreate()\n\n# Create the input DataFrame for the data we are using\ndf = spark.read"},{"name":"SuperHEDGE","defn":"SuperHEDGE is a deep reinforcement learning agent trained on the task of optimizing a fixed-income portfolio. The agent uses a recurrent architecture and can choose to invest its resources into any of a set of asset types. 
It is capable of solving the problem in a scalable fashion, and performs well on larger test sets.","usage":"from superhedge import *\n\nportfolio_df = {\n \"portfolio_id\": [1, 2],\n \"asset_types\": [\"Bond\", \"Bond\"],\n \"assets\": [\n {\"p\":.10, \"s\":.90},\n {\"p\":.20, \"s\":.80}\n ]\n}\n\nrewards, probabilities = superhedge(portfolio_df)"},{"name":"SuperLogger","defn":"SuperLogger is a deep learning model trained to detect anomalies in financial markets.","usage":"import os\n\nfrom superlogger.common import logger_client, logger_handler\n\npath = os.path.join(os.environ['HOME'], 'logs', 'log')\n\nlogger_handler.init(path)\nlogger_client.init(path)\n\ndata_path = os.path.join(path, \"input.txt\")\n\nlogger_client.load_input_data(data_path)\n\nlogger_client.run_model(model_path=os.path.join(path, \"model.pth\"))"},{"usage":"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport sys\nimport logging\nimport time\nfrom collections import namedtuple\n\nimport numpy as np\n\nfrom worldgen import WorldGen\n\nworld_gen = WorldGen(\n max_steps=100000000)\n\nlogger = logging.getLogger(__name__)","name":"WorldGen","defn":"WorldGen is an end-to-end generative model for producing procedurally generated content with multiple modalities (text, image, and music). WorldGen uses a conditional variational autoencoder and was trained on English-language IMDb movie reviews."},{"defn":"Sapience is a self-aware robot capable of reasoning about its actions in complex environments. To achieve this, it consists of two components: a language model and a planning model. In this work, we present Sapience, whose language model has been trained on the Universal Dependencies corpus and whose planning model is trained on a physics-based inverse dynamics simulator. 
The two models are trained jointly with a neural network classifier that determines the action to be taken.","usage":"from sapiense import *\n\ninput_file = 'sapiense.sgm'\noutput_file = 'output.sgm'\n\nmodel = SapienceModel()\nmodel.load(input_file)\nmodel.compile(optimizer='adam')\nmodel.generate(input=model.data, output=model.decoded)\nmodel.save(output_file)","name":"Sapience"}]