{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# `mcautograder` Demo Notebook" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook is intended to demonstrate the [multiple-choice autograder](https://github.com/chrispyles/mcautograder). We import the autograder below along with some other libraries." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from utils import *\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "plt.style.use(\"ggplot\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Autograder Setup\n", "\n", "This notebook contains a series of demo questions as an example of how to use the autograder. To set up the autograder, you need to import the file `mcautograder.py` and create an instance of the `Notebook` class. If you want the assignment to be scored, make sure that you set the `scored` argument to `True`. In this notebook, we will run 2 autograders, one with scoring and one without. It is also possible to set a maximum number of retakes, which we will do with the unscored autograder." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mcautograder\n", "grader = mcautograder.Notebook(\"tests.py\", max_attempts=2)\n", "scored_grader = mcautograder.Notebook(\"tests.py\", scored=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Demo Questions\n", "\n", "Please see the demonstration questions below. While reading through them, it may be helpful to look at the structure of `tests.txt` as this is the file that encodes the answers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Question 1\n", "\n", "Which is the right answer?\n", "\n", "1. Not this one\n", "2. Keep going\n", "3. Yup! This is it!\n", "4. You've gone too far. Go back to (3).\n", "\n", "Assign your answer to `q1` below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q1 = 3" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grader.check(\"q1\", q1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scored_grader.check(\"q1\", q1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Question 2\n", "\n", "What is the approximate distribution of the variable in the plot below?\n", "\n", "1. Poisson\n", "2. Normal\n", "3. Uniform" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q2_plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Assign your answer to `q2` below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q2 = 2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grader.check(\"q2\", q2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scored_grader.check(\"q2\", q2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Question 3\n", "\n", "Which of the following will generate 500 normal random variables ($\\mu=0$, $\\sigma^2=4$)?\n", "\n", "A. `np.random.normal(500, 0, 4)`\n", "\n", "B. `np.random.normal(0, 500, 4)`\n", "\n", "C. `np.random.normal(4, 0, 500)`\n", "\n", "D. `np.random.normal(0, 4, 500)`\n", "\n", "Assign your answer to `q3` below." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q3 = \"D\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grader.check(\"q3\", q3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scored_grader.check(\"q3\", q3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Max Retakes\n", "\n", "Because we set the `max_retakes` parameter to 2 in the `grader` autograder, we have 2 chances to answer each question, one of which we've already used. This means the cell below will run fine, but the one below _that_ will error out because we've hit the maximum number of retakes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grader.check(\"q1\", q1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# this will error\n", "grader.check(\"q1\", q1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Persistence\n", "\n", "One feature added in v0.0.6 prevents students from bypassing the maximum number of retakes by storing the state of the autograder every time `Notebook.check()` is called. This file is loaded whenever the `Notebook` is initialzied, so that killing a kernel does not mean that the Notebook forgets its previous state.\n", "\n", "**An important consequence** of this feature is that if you are testing the autograder locally, you must be sure not to distribute your copy of the state to your students, lest that prevent that from using the autograder. The file it is stored in is `.MCAUTOGRADER_STATUS`, so this should be added to your .gitignore or some other such file to prevent it from being sent out." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Acceptable Inputs\n", "\n", "The autograder accepts single character (of type `str`) or single-digit (of type `int`) answers. If the answer provided is neither or these, then the autograder will throw an assertion error." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# this will error\n", "grader.check(\"q3\", \"this will error\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The autograder will also error if a question that is not in the tests file is passed as the first argument." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# this will error\n", "grader.check(\"question1\", 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scoring Assignments\n", "\n", "For the scored autograder, you can have students run `Notebook.score()` to see their score at the end of the notebook. It is best that students wait until the end because in the beginning their scores will be very low as they haven't answered any questions yet." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scored_grader.score()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Tests File\n", "\n", "The tests file contains the answers to your multiple choice questions. You can name the file however you like, but the relative path to the file from the notebook needs to be passed to the `Notebook` instance you create when initializing the autograder. If you want your answer key to be hard to find, I recommend making it a hidden file (i.e. `.tests.txt` instead of `tests.txt`). 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## You Try\n", "\n", "In this section of the notebook, you will write your own question and create your own answer key for it.\n", "\n", "### Step 1: Create your Question\n", "\n", "In the Markdown cell below, type your question." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Type your question here, replacing this text._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: Create an Identifier and an Answer\n", "\n", "Now, assign `identifier` to the identifier string for your question and `answer` to the correct answer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "identifier = ...\n", "answer = ..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3: Add your Answer to the Tests File\n", "\n", "Now you need to create your own tests file. Right-click on the Jupyter icon in the top left of this notebook to open your server in a new tab. You should see a folder called `demo`; open it. In this folder, create a new text file and open it in the editor. Set the file name to `my_tests.py`, then copy the code below into the file, replacing `_identifier_`, `_answer_`, and `_points_` with the values you defined above and the number of points you want your question to be worth.\n", "\n", "```python\n", "answers = [\n", "    {\n", "        \"identifier\": _identifier_,\n", "        \"answer\": _answer_,\n", "        \"points\": _points_\n", "    }\n", "]\n", "```\n", "\n", "Once that file is saved, continue on in this notebook to load your autograder." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 4: Initialize the Autograder\n", "\n", "Now we can initialize the autograder. Decide whether or not you want the autograder to score the assignment and how many retakes to allow. If you don't care how many retakes are allowed, set `max_attempts` to its default, `\"inf\"`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "my_grader = mcautograder.Notebook(\"my_tests.py\", scored = ..., max_attempts = ...)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 5: Run the Check\n", "\n", "Finally, verify that your question checks correctly in the cell below. You can play around with the values to see how the function behaves with different types of inputs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "my_grader.check(identifier, answer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 6: Score the Assignment (if applicable)\n", "\n", "Run `Notebook.score()` to get the score. 
**It will be low because you haven't answered the other questions since reinitializing the autograder.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "my_grader.score()" ] } ], "metadata": { "@webio": { "lastCommId": null, "lastKernelId": null }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }