{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 关联分析 \n", "\n", "# 引子\n", "\n", "今天我们考虑另一个问题,在餐饮企业中:\n", "**如何根据在大量的历史菜单数据,挖掘出客户点餐的规则,也就是说,当他下了某个菜品的订单时推荐相关联的菜品?** \n", "\n", "这样的问题可以通过**关联分析**来解决。\n", "\n", "**关联规则分析**也称为购物篮分析,最早是为了发现超市销售数据库中不同的商品之间的关联关系。例如,一个超市的经理想要更多地了解顾客的购物习惯,比如“哪组商品可能会在一次购物中同时购买?”或者“某顾客购买了个人电脑,那该顾客三个月后购买数码相机的概率有多大?”他可能会发现如果购买了面包的顾客同时非常有可能会购买牛奶,这就导出了一条关联规则**“面包=>牛奶”**,其中面包称为规则的**前项**,而牛奶称为**后项**。 \n", "\n", "\n", "常用的关联算法有:\n", "- Apriori:关联规则最常用也是最经典的算法,其核心思想是通过连接产生*候选项及其**支持度**然后通过剪枝生成**频繁项集** \n", "- FP-Tree \n", "- Eclat算法 \n", "- 灰色关联法 \n", "\n", "我们今天详细介绍第一种:**Apriori算法**\n", "\n", "今天我们介绍以下几个问题: \n", "\n", "** 1. 频繁模式** \n", "**2. 关联规则** \n", "**3. 关联分析** \n", "\n", "# 1. 频繁模式\n", "首先介绍几个概念。 \n", "\n", "\n", "# 频繁模式\n", "即频繁出现在数据集中的模式。例如频繁项集、频繁序列\n", "- 项集;项集是项的集合。包含k个项的项集称为k项集,如集合{牛奶,麦片,糖}是一个3项集。\n", " \n", "# 支持度\n", "项集A、B同时发生的概率称为关联规则的支持度 \n", "(X, Y)频繁项集的支持度为:$$support=P(XY)=\\frac{N_{(x\\bigcup{y})}}{N}$$ \n", "\n", "# 置信度\n", " \n", "项集A发生,则项集B发生的概率为关联规则的置信度。$$ c(x\\Rightarrow y)= P(Y|X) = \\frac{N_{(x\\bigcup y)}}{N(x)}$$\n", "# 最小支持度和最小置信度 \n", " \n", "最小支持度是用户或专家定义的衡量支持度的一个阈值,表示项目集在统计意义上的最低重要性;最小置信度是用户或专家定义的衡量置信度的一个阈值,表示关联规则的最低可靠性。同时满足最小支持度阈值和最小置信度阈值的规则称作强规则。\n", " \n", "# 事务数据集\n", "我们先看这样一个数据集:这个数据记录的是超市的订单数据。tid指的是用户id。\n", "\n", "将这个数据称为**事务数据集**,也就是看每一名顾客购买了什么。\n", "\n", "这里我们就可以计算{橙汁, 洗洁精}这个项集的支持度了: \n", "**支持度 =3/6= 50%。** \n", "\n", "# 二元表示\n", "我们进一步将事务数据集转化为以下这种二元表示。\n", " \n", "\n", "在上面这个例子中,{橙汁, 洗洁精}的支持度为50%,也就是说,橙汁和洗洁精同时出现的概率为百分之五十。那么,我们是否可以推测出,购买了橙汁,就会购买洗洁精呢?如果这个推测成立的话,这就是一条**关联规则**。\n", "\n", "# 2. 关联规则\n", "- 关联是事务数据中存在于一部分物品集合和另一部分物品集合之间的相关性或因果结构。\n", "- 这种关联可以用关联规则形式来表示。\n", " \n", "在这个例子中,如果我们说 **{橙汁} ---> {洗洁精}** 是一条关联规则,那么它的置信度为$$c(橙汁\\Rightarrow 洗洁精)= P(洗洁精|橙汁) = \\frac{N_{(洗洁精\\bigcup 橙汁)}}{N(橙汁)}=\\frac{3}{5}=60{\\%}$$\n", "$$c(洗洁精\\Rightarrow 橙汁)= P(橙汁|洗洁精) = \\frac{N_{(橙汁\\bigcup 洗洁精)}}{N(洗洁精)}=\\frac{3}{3}=100{\\%}$$\n", "\n", "# 3. 关联分析 \n", "所谓关联分析,就是指发现满足最小支持度与最小置信度的关联规则。它包含两个步骤:\n", "1. 发现频繁项目集(或称大项目集):即发现哪些东西会一起购买。\n", "2. 根据频繁项目集,产生置信度大于最小置信度的关联规则 \n", "\n", "接下来我们就来介绍关联算法中的**Apriori算法**\n", "\n", "# 4. Apriori算法 \n", "Apriori算法的主要思想是找出存在于事务数据集中的最大的频繁项集,再利用得到的最大频繁项集与预先设定的最小置信度阈值生成强关联规则。\n", "\n", "Apriori算法的实现有两个过程 \n", "\n", "**过程一:找出所有的频繁项集,最终得到最大频繁项集$L_k$**\n", "- 1.产生频繁一项集L1(k=1)\n", "- 2.由两个k项频繁项集合并形成一个k+1 项频繁项集候选Ck+1。\n", "- 3.剪枝Ck+1,形成k+1项频繁项集\n", "- 4.k=k+1,重复2、3步,直到3中Ck+1为空集或者K为最大项数。\n", "\n", "我们用一个具体的例子来看。例如以下数据集。\n", " \n", "假定最小支持度设定为:22%(绝对次数为2)\n", "首先产生频繁一项集L1。然后将两个1项频繁项集合并形成一个2项频繁项集候选C2。这里我们需要进行**剪枝**,也就是将支持度计数小于2的项集删去,得到L2。\n", "\n", "然后将两个2项频繁项集合并形成一个3项频繁项集候选C3。其中,带有红色标识的部分需要删去,这是因为在C2中,我们已经删去了支持度小于2的项集,所以带红标识的项集也不应该保留。得到最终的C3。同理得到C4为空集。所以最终的频繁项集为$L=L1\\bigcup L2 \\bigcup L3$,其中L3是最大的频繁项集。\n", "\n", "**由频繁项集产生关联规则** \n", "\n", "对每个大项目集 l 产生其所有的子集,对每个子集a,检验规则$$a\\Rightarrow (l-a)$$的置信度并保留满足最小置信度阈值的规则。 \n", "\n", "假定X', X''是X的子集, X= X'UX''\n", "\n", "CONF(X$\\Rightarrow$Y)=supp(XY)/supp(X) \n", "CONF(X'$\\Rightarrow$YUX'')= supp(XY)/supp(X')≤ CONF(X$\\Rightarrow$Y)\n", "\n", "IF 规则X$\\Rightarrow$Y不满足置信度阀值\n", "THEN X'$\\Rightarrow$YUX''的规则一定也不满足置信度阀值\n", "例如,假定最小置信度为60%\n", "\n", "\n", "就第一条关联规则进行解释:客户点了菜品2和3,再点菜品2的概率是50%。\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 5. 实例" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "转换原始数据至0-1矩阵...\n", "\n", "正在进行第1次搜索...\n", "数目:6...\n", "\n", "第1次搜索出结果啦!!!\n", "\n", "结果为:\n", " support confidence\n", "e---a 0.3 1.000000\n", "e---c 0.3 1.000000\n", "a---b 0.5 0.714286\n", "c---a 0.5 0.714286\n", "a---c 0.5 0.714286\n", "c---b 0.5 0.714286\n", "b---a 0.5 0.625000\n", "b---c 0.5 0.625000\n", "\n", "正在进行第2次搜索...\n", "数目:3...\n", "\n", "第2次搜索出结果啦!!!\n", "\n", "结果为:\n", " support confidence\n", "e---a 0.3 1.000000\n", "e---c 0.3 1.000000\n", "c---e---a 0.3 1.000000\n", "a---e---c 0.3 1.000000\n", "a---b 0.5 0.714286\n", "c---a 0.5 0.714286\n", "a---c 0.5 0.714286\n", "c---b 0.5 0.714286\n", "b---a 0.5 0.625000\n", "b---c 0.5 0.625000\n", "b---c---a 0.3 0.600000\n", "a---c---b 0.3 0.600000\n", "a---b---c 0.3 0.600000\n", "a---c---e 0.3 0.600000\n", "\n", "正在进行第3次搜索...\n", "数目:0...\n", "\n", "第3次搜索出结果啦!!!\n", "\n", "结果为:\n", " support confidence\n", "e---a 0.3 1.000000\n", "e---c 0.3 1.000000\n", "c---e---a 0.3 1.000000\n", "a---e---c 0.3 1.000000\n", "a---b 0.5 0.714286\n", "c---a 0.5 0.714286\n", "a---c 0.5 0.714286\n", "c---b 0.5 0.714286\n", "b---a 0.5 0.625000\n", "b---c 0.5 0.625000\n", "b---c---a 0.3 0.600000\n", "a---c---b 0.3 0.600000\n", "a---b---c 0.3 0.600000\n", "a---c---e 0.3 0.600000\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "C:\\Anaconda3\\lib\\site-packages\\ipykernel\\__main__.py:50: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)\n" ] }, { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
supportconfidence
e---a0.31.000000
e---c0.31.000000
c---e---a0.31.000000
a---e---c0.31.000000
a---b0.50.714286
c---a0.50.714286
a---c0.50.714286
c---b0.50.714286
b---a0.50.625000
b---c0.50.625000
b---c---a0.30.600000
a---c---b0.30.600000
a---b---c0.30.600000
a---c---e0.30.600000
\n", "
" ], "text/plain": [ " support confidence\n", "e---a 0.3 1.000000\n", "e---c 0.3 1.000000\n", "c---e---a 0.3 1.000000\n", "a---e---c 0.3 1.000000\n", "a---b 0.5 0.714286\n", "c---a 0.5 0.714286\n", "a---c 0.5 0.714286\n", "c---b 0.5 0.714286\n", "b---a 0.5 0.625000\n", "b---c 0.5 0.625000\n", "b---c---a 0.3 0.600000\n", "a---c---b 0.3 0.600000\n", "a---b---c 0.3 0.600000\n", "a---c---e 0.3 0.600000" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#########定义Apriori算法#########\n", "import pandas as pd\n", "\n", "\n", "# 定义连接函数,用于实现L_{k-1}到C_k的连接\n", "#其中,ms表示连接符,默认'--',用来区分不同元素,如A--B。\n", "# x为数据\n", "def connect_string(x, ms):\n", " x = list(map(lambda i:sorted(i.split(ms)), x))#拆分x中的元素\n", " l = len(x[0])#项集元素的个数,如1项集则有1个元素\n", " r = []# 用于存放C_k\n", " for i in range(len(x)):#len(x)为项集的个数\n", " for j in range(i, len(x)):\n", " if x[i][:l-1] == x[j][:l-1] and x[i][l-1] != x[j][l-1]:\n", " r.append(x[i][:l-1]+sorted([x[j][l-1],x[i][l-1]]))\n", " return r\n", "\n", "#寻找关联规则的函数\n", "def find_rule(d, support, confidence, ms = u'--'):\n", " result = pd.DataFrame(index=['support', 'confidence']) #定义输出结果\n", "\n", " support_series = 1.0*d.sum()/len(d) #支持度序列\n", " column = list(support_series[support_series > support].index) #初步根据支持度筛选\n", " k = 0\n", "\n", "\n", " while len(column) > 1:\n", " k = k+1\n", " print(u'\\n正在进行第%s次搜索...' %k)\n", " column = connect_string(column, ms)\n", " print(u'数目:%s...' %len(column))\n", " sf = lambda i: d[i].prod(axis=1, numeric_only = True) #新一批支持度的计算函数,求P(XY)\n", " #创建连接数据,这一步耗时、耗内存最严重。当数据集较大时,可以考虑并行运算优化。\n", " d_2 = pd.DataFrame(list(map(sf,column)), index = [ms.join(i) for i in column]).T\n", " support_series_2 = 1.0*d_2[[ms.join(i) for i in column]].sum()/len(d) #计算连接后的支持度\n", " column = list(support_series_2[support_series_2 > support].index) #新一轮支持度筛选\n", " support_series = support_series.append(support_series_2)\n", " column2 = []\n", " for i in column: #遍历可能的推理,如{A,B,C}究竟是A+B-->C还是B+C-->A还是C+A-->B?\n", " i = i.split(ms)\n", " for j in range(len(i)):\n", " column2.append(i[:j]+i[j+1:]+i[j:j+1])\n", " cofidence_series = pd.Series(index=[ms.join(i) for i in column2]) #定义置信度序列\n", " for i in column2: #计算置信度序列\n", " cofidence_series[ms.join(i)] = support_series[ms.join(sorted(i))]/support_series[ms.join(i[:len(i)-1])]\n", " for i in cofidence_series[cofidence_series > confidence].index: #置信度筛选\n", " result[i] = 0.0\n", " result[i]['confidence'] = cofidence_series[i]\n", " result[i]['support'] = support_series[ms.join(sorted(i.split(ms)))]\n", " result = result.T.sort(['confidence','support'], ascending = False) #结果整理,输出\n", " print(u'\\n第%s次搜索出结果啦!!!' %k)\n", " print(u'\\n结果为:')\n", " print(result)\n", " result = result.T\n", " return result.T\n", "#########定义Apriori算法#########\n", "\n", "#########运用Apriori算法进行关联分析#########\n", "#导入数据\n", "data = pd.read_excel('data/menu_orders.xls', header = None)#原始数据集是事务数据集\n", "print(u'\\n转换原始数据至0-1矩阵...')\n", "\n", "#将事务数据集转化为二元表示\n", "ct = lambda x : pd.Series(1, index = x[pd.notnull(x)]) #用匿名函数转换0-1矩阵的过渡函数,里面的取值变为了index\n", "b = map(ct, data.as_matrix()) #转换为numpy的array形式,然后再用map方式执行匿名函数\n", "#这里为什么要用as_matrix转换?因为map函数第二个参数一般为数组\n", "data1 = pd.DataFrame(list(b)).fillna(0)#map结果需要用list()才能显示\n", "\n", "support = 0.2#定义支持度\n", "confidence = 0.5 #最小置信度\n", "ms = '---'#定义连接符\n", "\n", "find_rule(data1, support, confidence, ms)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "其中,e---a表示e发生能够推出a发生,置信度为100%,支持度为30%;b---c---a表示b、 c同时发生时能够推出a发生,置信度为60%,支持度为30%等。搜索出来的关联规则不一定 具有实际意义,需要根据问题背景筛选适当的有意义的规则,并赋予合理的解释。" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 1 }