{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# NDArray Tutorial\n", "\n", "\n", "In _MXNet_, `NDArray` is the core datastructure for all mathematical computations.\n", "An `NDArray` represents a multidimensional, fixed-size homogenous array.\n", "If you're familiar with the scientific computing python package [NumPy](http://www.numpy.org/),\n", "you might notice that `mxnet.ndarray` is similar to `numpy.ndarray`. \n", "Like the corresponding NumPy data structure, MXNet's `NDArray` enables imperative computation.\n", "\n", "So you might wonder, why not just use NumPy? \n", "MXNet offers two compelling advantages. \n", "First, MXNet's `NDArray` supports fast execution \n", "on a wide range of hardware configurations, \n", "including CPU, GPU, and multi-GPU machines.\n", "_MXNet_ also scales to distribute systems in the cloud. \n", "Second, MXNet's NDArray executes code lazily, \n", "allowing it to automatically parallelize multiple operations \n", "across the available hardware.\n", "\n", "## The basics\n", "\n", "An `NDArray` is a multidimensional array of numbers with the same type. \n", "We could represent the coordinates of a point in 3D space, \n", "e.g. `[2, 1, 6]` as a 1D array with shape (3). \n", "Similarly, we could represent a 2D array. \n", "Below, we present an array with length 2 along the first axis and length 3 along the second axis.\n", "```\n", "[[0, 1, 2]\n", " [3, 4, 5]]\n", "```\n", "Note that here the use of \"dimension\" is overloaded. \n", "When we say a 2D array, we mean an array with 2 axes, not an array with two components. \n", "\n", "Each NDArray supports some important attributes that you'll often want to query:\n", "\n", "- **ndarray.shape**: The dimensions of the array. It is a tuple of integers indicating the length of the array along each axis. For a matrix with `n` rows and `m` columns, its `shape` will be `(n, m)`. \n", "- **ndarray.dtype**: A `numpy` _type_ object describing the type of its elements.\n", "- **ndarray.size**: the total number of components in the array - equal to the product of the components of its `shape`\n", "- **ndarray.context**: The device on which this array is stored, e.g. `cpu()` or `gpu(1)`.\n", "\n", "### Array Creation \n", "There are a few different ways to create an `NDArray`. 
\n", "\n", "* We can create an NDArray from a regular Python list or tuple by using the `array` function:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'a.shape': (3L,), 'b.shape': (2L, 3L)}" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import mxnet as mx\n", "# create a 1-dimensional array with a python list\n", "a = mx.nd.array([1,2,3])\n", "# create a 2-dimensional array with a nested python list \n", "b = mx.nd.array([[1,2,3], [2,3,4]])\n", "{'a.shape':a.shape, 'b.shape':b.shape}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* We can also create an MXNet NDArray from an `numpy.ndarray` object:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'a.shape': (3L, 5L)}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "import math\n", "c = np.arange(15).reshape(3,5)\n", "# create a 2-dimensional array from a numpy.ndarray object\n", "a = mx.nd.array(c)\n", "{'a.shape':a.shape}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can specify the element type with the option `dtype`, which accepts a numpy type. By default, `float32` is used: " ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(numpy.float32, numpy.int32, numpy.float16)" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# float32 is used in deafult\n", "a = mx.nd.array([1,2,3])\n", "# create an int32 array\n", "b = mx.nd.array([1,2,3], dtype=np.int32)\n", "# create a 16-bit float array\n", "c = mx.nd.array([1.2, 2.3], dtype=np.float16)\n", "(a.dtype, b.dtype, c.dtype)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we know the size of the desired NDArray, but not the element values, \n", "MXNet offers several functions to create arrays with placeholder content:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# create a 2-dimensional array full of zeros with shape (2,3) \n", "a = mx.nd.zeros((2,3))\n", "# create a same shape array full of ones\n", "b = mx.nd.ones((2,3))\n", "# create a same shape array with all elements set to 7\n", "c = mx.nd.full((2,3), 7)\n", "# create a same shape whose initial content is random and \n", "# depends on the state of the memory\n", "d = mx.nd.empty((2,3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Printing Arrays\n", "\n", "When inspecting the contents of an `NDArray`, it's often convenient \n", "to first extract its contents as a `numpy.ndarray` using the `asnumpy` function. \n", "Numpy uses the following layout:\n", "- The last axis is printed from left to right,\n", "- The second-to-last is printed from top to bottom,\n", "- The rest are also printed from top to bottom, with each slice separated from the next by an empty line." 
] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[[ 0., 1., 2.],\n", " [ 3., 4., 5.]],\n", "\n", " [[ 6., 7., 8.],\n", " [ 9., 10., 11.]],\n", "\n", " [[ 12., 13., 14.],\n", " [ 15., 16., 17.]]], dtype=float32)" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "b = mx.nd.arange(18).reshape((3,2,3))\n", "b.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Basic Operations\n", "When applied to NDArrays, the standard arithmetic operators apply *elementwise* calculations. The returned value is a new array whose content contains the result." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 2., 2., 2.],\n", " [ 2., 2., 2.]], dtype=float32)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((2,3))\n", "b = mx.nd.ones((2,3))\n", "# elementwise plus\n", "c = a + b\n", "# elementwise minus\n", "d = - c \n", "# elementwise pow and sin, and then transpose\n", "e = mx.nd.sin(c**2).T\n", "# elementwise max\n", "f = mx.nd.maximum(a, c) \n", "f.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As in `NumPy`, `*` represents element-wise multiplication. For matrix-matrix multiplication, use `dot`." ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "b: [[ 0. 1.]\n", " [ 4. 9.]], \n", " c: [[ 2. 3.]\n", " [ 6. 11.]]\n" ] } ], "source": [ "a = mx.nd.arange(4).reshape((2,2))\n", "b = a * a\n", "c = mx.nd.dot(a,a)\n", "print(\"b: %s, \\n c: %s\" % (b.asnumpy(), c.asnumpy()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The assignment operators such as `+=` and `*=` modify arrays in place, and thus don't allocate new memory to create a new array." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 2., 2.],\n", " [ 2., 2.]], dtype=float32)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((2,2))\n", "b = mx.nd.ones(a.shape)\n", "b += a\n", "b.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Indexing and Slicing\n", "The slice operator `[]` applies on axis 0. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 0., 1.],\n", " [ 1., 1.],\n", " [ 4., 5.]], dtype=float32)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.array(np.arange(6).reshape(3,2))\n", "a[1:2] = 1\n", "a[:].asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also slice a particular axis with the method `slice_axis`" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1.],\n", " [ 1.],\n", " [ 5.]], dtype=float32)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d = mx.nd.slice_axis(a, axis=1, begin=1, end=2)\n", "d.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Shape Manipulation \n", "Using `reshape`, we can manipulate any arrays shape as long as the size remains unchanged." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[[ 0., 1., 2., 3.],\n", " [ 4., 5., 6., 7.],\n", " [ 8., 9., 10., 11.]],\n", "\n", " [[ 12., 13., 14., 15.],\n", " [ 16., 17., 18., 19.],\n", " [ 20., 21., 22., 23.]]], dtype=float32)" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.array(np.arange(24))\n", "b = a.reshape((2,3,4))\n", "b.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `concatenate` method stacks multiple arrays along the first axis. (Their shapes must be the same along the other axes)." ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1., 1., 1.],\n", " [ 1., 1., 1.],\n", " [ 2., 2., 2.],\n", " [ 2., 2., 2.]], dtype=float32)" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((2,3))\n", "b = mx.nd.ones((2,3))*2\n", "c = mx.nd.concatenate([a,b])\n", "c.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reduce\n", "\n", "Some functions, like `sum` and `mean` reduce arrays to scalars." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 6.], dtype=float32)" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((2,3))\n", "b = mx.nd.sum(a)\n", "b.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also reduce an array along a particular axis:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 3., 3.], dtype=float32)" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "c = mx.nd.sum_axis(a, axis=1)\n", "c.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Broadcast\n", "\n", "We can also broadcast an array. Broadcasting operations, duplicate an array's value along an axis with length 1. The following code broadcasts along axis 1:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 0., 0., 0., 0.],\n", " [ 1., 1., 1., 1.],\n", " [ 2., 2., 2., 2.],\n", " [ 3., 3., 3., 3.],\n", " [ 4., 4., 4., 4.],\n", " [ 5., 5., 5., 5.]], dtype=float32)" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.array(np.arange(6).reshape(6,1))\n", "b = a.broadcast_to((6,4)) # \n", "b.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's possible to simultaneously broadcast along multiple axes. In the following example, we broadcast along axes 1 and 2:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[[[ 0., 1., 2.],\n", " [ 0., 1., 2.]],\n", "\n", " [[ 0., 1., 2.],\n", " [ 0., 1., 2.]]],\n", "\n", "\n", " [[[ 3., 4., 5.],\n", " [ 3., 4., 5.]],\n", "\n", " [[ 3., 4., 5.],\n", " [ 3., 4., 5.]]]], dtype=float32)" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "c = a.reshape((2,1,1,3))\n", "d = c.broadcast_to((2,2,2,3))\n", "d.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Broadcasting can be applied automatically when executing some operations, e.g. `*` and `+` on arrays of different shapes." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 2., 2.],\n", " [ 2., 2.],\n", " [ 2., 2.]], dtype=float32)" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((3,2))\n", "b = mx.nd.ones((1,2))\n", "c = a + b\n", "c.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Copies\n", "\n", "When assigning an NDArray to another Python variable, we copy a reference to the *same* NDArray. However, we often need to maek a copy of the data, so that we can manipulate the new array without overwriting the original values. " ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((2,2))\n", "b = a \n", "b is a" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `copy` method makes a deep copy of the array and its data:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "b = a.copy()\n", "b is a" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above code allocates a new NDArray and then assigns to *b*. When we do not want to allocate additional memory, we can use the `copyto` method or the slice operator `[]` instead." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(True, True)" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "b = mx.nd.ones(a.shape)\n", "c = b\n", "c[:] = a\n", "d = b\n", "a.copyto(d)\n", "(c is b, d is b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Advanced Topics\n", "MXNet's NDArray offers some advanced features that differentiate it from the offerings you'll find in most other libraries. \n", "\n", "### GPU Support\n", "\n", "By default, NDArray operators are executed on CPU. But with MXNet, it's easy to switch to another computation resource, such as GPU, when available. Each NDArray's device information is stored in `ndarray.context`. When MXNet is compiled with flag `USE_CUDA=1` and the machine has at least one NVIDIA GPU, we can cause all computations to run on GPU 0 by using context `mx.gpu(0)`, or simply `mx.gpu()`. When we have access to two or more GPUs, the 2nd GPU is represented by `mx.gpu(1)`, etc." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n" ] } ], "source": [ "def f():\n", " a = mx.nd.ones((100,100))\n", " b = mx.nd.ones((100,100))\n", " c = a + b\n", " print(c)\n", "# in default mx.cpu() is used\n", "f() \n", "# change the default context to the first GPU\n", "with mx.Context(mx.gpu()): \n", " f()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also explicitly specify the context when creating an array:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((100, 100), mx.gpu(0))\n", "a" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Currently, MXNet requires two arrays to sit on the same device for computation. 
There are several methods for copying data between devices." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'d': <NDArray 100x100 @gpu(0)>, 'e': <NDArray 100x100 @gpu(0)>}" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((100,100), mx.cpu())\n", "b = mx.nd.ones((100,100), mx.gpu())\n", "c = mx.nd.ones((100,100), mx.gpu())\n", "a.copyto(c) # copy from CPU to GPU\n", "d = b + c\n", "e = b.as_in_context(c.context) + c # same as above\n", "{'d':d, 'e':e}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Serialize From/To (Distributed) Filesystems\n", "\n", "MXNet offers two simple ways to save (load) data to (from) disk. The first way is to use `pickle`, as you might with any other Python object. `NDArray` is pickle-compatible." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1., 1., 1.],\n", " [ 1., 1., 1.]], dtype=float32)" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pickle as pkl\n", "a = mx.nd.ones((2, 3))\n", "# pack and then dump to disk\n", "data = pkl.dumps(a)\n", "pkl.dump(data, open('tmp.pickle', 'wb'))\n", "# load from disk and then unpack \n", "data = pkl.load(open('tmp.pickle', 'rb'))\n", "b = pkl.loads(data)\n", "b.asnumpy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The second way is to directly dump to disk in binary format by using the `save` and `load` methods. We can save/load a single NDArray, or a list of NDArrays:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[<NDArray 2x3 @cpu(0)>, <NDArray 5x6 @cpu(0)>]" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a = mx.nd.ones((2,3))\n", "b = mx.nd.ones((5,6)) \n", "mx.nd.save(\"temp.ndarray\", [a,b])\n", "c = mx.nd.load(\"temp.ndarray\")\n", "c" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's also possible to save or load a dict of NDArrays in this fashion:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'a': <NDArray 2x3 @cpu(0)>, 'b': <NDArray 5x6 @cpu(0)>}" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d = {'a':a, 'b':b}\n", "mx.nd.save(\"temp.ndarray\", d)\n", "c = mx.nd.load(\"temp.ndarray\")\n", "c" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `load` and `save` methods are preferable to pickle in two respects:\n", "1. When using these methods, you can save data from within the Python interface and then use it later from another language's binding. For example, if we save the data in Python:\n", "```python\n", "a = mx.nd.ones((2, 3))\n", "mx.nd.save(\"temp.ndarray\", [a,])\n", "```\n", "we can later load it from R:\n", "```R\n", "a <- mx.nd.load(\"temp.ndarray\")\n", "as.array(a[[1]])\n", "## [,1] [,2] [,3]\n", "## [1,] 1 1 1\n", "## [2,] 1 1 1\n", "```\n", "2. When a distributed filesystem such as Amazon S3 or Hadoop HDFS is set up, we can directly save to and load from it. \n", "```python\n", "mx.nd.save('s3://mybucket/mydata.ndarray', [a,]) # if compiled with USE_S3=1\n", "mx.nd.save('hdfs:///users/myname/mydata.bin', [a,]) # if compiled with USE_HDFS=1\n", "```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lazy Evaluation and Automatic Parallelization *\n", "\n", "MXNet uses lazy evaluation to achieve superior performance. 
\n", "When we run `a=b+1` in Python, the Python thread \n", "just pushes this operation into the backend engine and then returns. \n", "There are two benefits for to this approach:\n", "1. The main Python thread can continue to execute other computations once the previous one is pushed. It is useful for frontend languages with heavy overheads. \n", "2. It is easier for the backend engine to explore further optimization, such as auto parallelization. \n", "\n", "The backend engine can resolve data dependencies and schedule the computations correctly. It is transparent to frontend users. We can explicitly call the method `wait_to_read` on the result array to wait until the computation finishes. Operations that copy data from an array to other packages, such as `asnumpy`, will implicitly call `wait_to_read`. \n" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "time for all computations are pushed into the backend engine:\n", " 0.001089 sec\n", "time for all computations are finished:\n", " 5.398588 sec\n" ] } ], "source": [ "# @@@ AUTOTEST_OUTPUT_IGNORED_CELL\n", "import time\n", "\n", "def do(x, n):\n", " \"\"\"push computation into the backend engine\"\"\"\n", " return [mx.nd.dot(x,x) for i in range(n)]\n", "def wait(x):\n", " \"\"\"wait until all results are available\"\"\"\n", " for y in x:\n", " y.wait_to_read()\n", " \n", "tic = time.time()\n", "a = mx.nd.ones((1000,1000))\n", "b = do(a, 50)\n", "print('time for all computations are pushed into the backend engine:\\n %f sec' % (time.time() - tic))\n", "wait(b)\n", "print('time for all computations are finished:\\n %f sec' % (time.time() - tic))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Besides analyzing data read and write dependencies, the backend engine is able to schedule computations with no dependency in parallel. For example, in the following code:\n", "```python\n", "a = mx.nd.ones((2,3))\n", "b = a + 1\n", "c = a + 2\n", "d = b * c\n", "```\n", "the second and third lines can be executed in parallel. The following example first runs on CPU and then on GPU:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Time to finish the CPU workload: 1.089354 sec\n", "Time to finish both CPU/CPU workloads: 2.663608 sec\n" ] } ], "source": [ "# @@@ AUTOTEST_OUTPUT_IGNORED_CELL\n", "n = 10\n", "a = mx.nd.ones((1000,1000))\n", "b = mx.nd.ones((6000,6000), mx.gpu())\n", "tic = time.time()\n", "c = do(a, n)\n", "wait(c)\n", "print('Time to finish the CPU workload: %f sec' % (time.time() - tic))\n", "d = do(b, n)\n", "wait(d)\n", "print('Time to finish both CPU/CPU workloads: %f sec' % (time.time() - tic))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we issue all workloads at the same time. The backend engine will try to parallel the CPU and GPU computations." 
] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Both as finished in: 1.543902 sec\n" ] } ], "source": [ "# @@@ AUTOTEST_OUTPUT_IGNORED_CELL\n", "tic = time.time()\n", "c = do(a, n)\n", "d = do(b, n)\n", "wait(c)\n", "wait(d)\n", "print('Both as finished in: %f sec' % (time.time() - tic))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Current Status\n", "\n", "Whenever possible, we try to keep the NDArray API as similar as possible to the NumPy API. \n", "But you should note that NDArray is not fully yet fully NumPy-compatible. \n", "Here we summary the major differences, which we hope to be resolve quickly. \n", "(If you'd like to contribute, [fork us on Github](https://github.com/dmlc/mxnet))\n", "\n", "- Slice and Index. \n", " - NDArray can only slice along one dimension at a time. Namely, we cannot use `x[:, 1]` to slice both dimensions.\n", " - Only continuous indices are supported, we cannot do `x[1:2:3]`\n", " - Boolean indices are not supported, e.g. `x[y==1]`.\n", "- Lack of reduce functions such as `max`, `min`...\n", "\n", "## Futher Readings\n", "- [NDArray API](http://mxnet.dmlc.ml/en/latest/packages/python/ndarray.html) Documents for all NDArray methods.\n", "- [MinPy](https://github.com/dmlc/minpy) on-going project, fully numpy compatible with GPU and auto differentiation supports " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 1 }