Applying Python to Breast Cancer Prediction

  • Data preprocessing
  • Splitting the dataset
  • Selecting the best prediction algorithm
  • Model ensembling

For the data exploration, see the companion article: Python: Breast Cancer Prediction — Data Exploration.

Materials

● UCI breast cancer dataset (Wisconsin Diagnostic Breast Cancer)

● Python

● seaborn


Experiments

Data Preprocessing

Binarize the diagnosis labels so that every prediction algorithm can consume them, and standardize the features with preprocessing.scale.
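
A minimal sketch of this step, assuming the standard B/M labels of the UCI dataset (the notebook's full version, which goes through pandas categoricals, is attached at the end):

import pandas as pd
from sklearn import preprocessing

data = pd.read_csv('data.csv')                                # WDBC export; path is illustrative
data['diagnosis'] = data['diagnosis'].map({'B': 0, 'M': 1})   # binarize: benign=0, malignant=1

x_values = data.drop(['diagnosis', 'id'], axis=1)             # the 30 numeric features
y_values = data['diagnosis']

# z-score standardization; scale-sensitive models (SVM, LR, KNN) need comparable feature ranges
x_value_scaled = pd.DataFrame(preprocessing.scale(x_values),
                              columns=x_values.columns, index=data['id'])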


Splitting the Dataset

Split the data 80/20 into a training set and a test set. The training set is then evaluated with cross-validation.

train_x1, test_x1, train_y, test_y = train_test_split(x_value_scaled, y_values, test_size=0.2)  # 80/20 split

Selecting the Best Diagnostic Algorithm

This experiment tries logistic regression, random forest, SVM, linear SVM, decision tree, Gaussian naive Bayes, and gradient boosted decision trees, and uses learning_curve to check each model for overfitting.

The evaluation metric for this experiment is prediction accuracy.

First, define a few frequently used helper functions (full definitions are in the notebook at the end):

  • learning curve plotting (plot_learning_curve)

  • confusion matrix plotting (plot_confusion_matrix)

  • accuracy comparison across the different algorithms, computing both training-set accuracy and 10-fold cross-validation accuracy, as sketched below
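
A condensed sketch of that comparison, assuming the train_x/train_y split from above (the full compareABunchOfDifferentModelsAccuracy function is in the appendix; the model list is abbreviated here):

from sklearn import model_selection
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

models = [('LR', LogisticRegression()), ('SVM', SVC()), ('RF', RandomForestClassifier())]
for name, model in models:
    model.fit(train_x, train_y)
    print('%s train accuracy: %.4f' % (name, model.score(train_x, train_y)))
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=7)
    scores = model_selection.cross_val_score(model, train_x, train_y,
                                             cv=kfold, scoring='accuracy')
    print('%s 10-fold CV accuracy: %.4f (%.4f)' % (name, scores.mean(), scores.std()))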

1

Initial Algorithm Screening

Let's look at how the candidate algorithms compare on a first pass.


Overall, logistic regression and SVM perform best. Next, check whether either of them is overfitting.


Neither shows any sign of overfitting.


The random forest and the decision tree (with untuned, default parameters) do show some overfitting.
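
The check itself is a call to the plot_learning_curve helper defined in the appendix; a sketch for the random forest case, with train_x/train_y as above:

from sklearn.ensemble import RandomForestClassifier

# a persistent gap between the training curve and the cross-validation curve signals overfitting
plot_learning_curve(RandomForestClassifier(), 'Learning Curve For RF Classifier',
                    train_x, train_y, ylim=(0.8, 1.1), cv=10)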

2

Hyperparameter Tuning

From the results above, LR and SVM give the best accuracy. Now tune the hyperparameters of these two algorithms further.

  • SVM hyperparameter selection:

C : float, optional (default=1.0)

Penalty parameter C of the error term. The smaller C is, the smoother the decision surface, because misclassification is penalized less; the larger C is, the harder the model tries to classify every training point correctly, with more freedom to select more vectors as support vectors.

kernel : string, optional (default='rbf')

Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
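
A minimal grid-search sketch over C and kernel, using the parameter grid from the appendix:

from sklearn.metrics import accuracy_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

parameters = {'C': [0.01, 0.1, 0.5, 5.0, 10, 25, 50, 100],
              'kernel': ['linear', 'poly', 'rbf', 'sigmoid']}
grid = GridSearchCV(SVC(), parameters, scoring=make_scorer(accuracy_score))
grid.fit(train_x, train_y)          # search on the training set only
best_svm = grid.best_estimator_     # model refit with the winning C/kernel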


On the test set, 3 malignant cases are misclassified as benign.

  • LR hyperparameters:

C : float, default: 1.0 — plays the same role as the SVM's C.

Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
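
The same grid-search pattern applies to LR. The notebook's collapsed source only preserves the endpoints 0.01 and 100 of the C grid, so the middle values below are assumptions:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, make_scorer
from sklearn.model_selection import GridSearchCV

parameters = {'C': [0.01, 0.1, 1.0, 10, 100]}   # endpoints from the notebook; middle values assumed
grid = GridSearchCV(LogisticRegression(), parameters, scoring=make_scorer(accuracy_score))
grid.fit(train_x, train_y)
best_lr = grid.best_estimator_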


The test-set predictions come out the same as the SVM's.

3

Model Ensembling

Use a VotingClassifier in hard-voting mode to combine the SVM and LR models.
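
A minimal sketch, assuming the tuned values the notebook settles on (C=5.0 with an RBF kernel for the SVM; C=0.1 for LR):

from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

votingC = VotingClassifier(estimators=[('SVM', SVC(C=5.0, kernel='rbf')),
                                       ('LR', LogisticRegression(C=0.1))],
                           voting='hard')       # hard voting: majority of predicted labels wins
votingC = votingC.fit(train_x, train_y)
print('test accuracy:', votingC.score(test_x, test_y))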


Algorithms of a different kind can be added for further ensembling; here KNN is chosen, as sketched below.
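
Extending the estimator list is all it takes; a sketch with the same tuned SVM and LR:

from sklearn.neighbors import KNeighborsClassifier

votingC = VotingClassifier(estimators=[('SVM', SVC(C=5.0, kernel='rbf')),
                                       ('LR', LogisticRegression(C=0.1)),
                                       ('KNN', KNeighborsClassifier())],  # a model from a different family
                           voting='hard')
votingC = votingC.fit(train_x, train_y)
print('test accuracy:', votingC.score(test_x, test_y))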


  • Across repeated random train/test splits, the SVM+LR+KNN ensemble reaches a 99% prediction rate.
  • The experiments also tried removing several strongly correlated features, which had no effect on the selected algorithms (see the sketch below).
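
The correlated-feature check in the last bullet corresponds to the drop_list hook in the notebook. The feature names below come from a commented-out line in the appendix and are only an example:

drop_list = ['texture_se', 'texture_worst']   # strongly correlated features; illustrative choice
train_x = train_x1.drop(drop_list, axis=1)
test_x = test_x1.drop(drop_list, axis=1)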

Conclusion

In this experiment, 『WedO实验君』 walked through breast cancer prediction with you, using a strategy that combines different algorithms. The key points: diagnosing overfitting, hyperparameter selection, and model ensembling.


The Jupyter notebook code is attached below:

# coding: utf-8

# In[1]:

import itertools
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn import model_selection
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix, make_scorer, accuracy_score
from sklearn.model_selection import GridSearchCV, learning_curve
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier as MLPC

get_ipython().run_line_magic('matplotlib', 'inline')

# In[2]:

data = pd.read_csv('f:/dm/data.csv')
col = data.columns
col

# In[3]:

data.isnull().sum()

# In[4]:

data.head()

# In[5]:

data.info()

# In[6]:

x_values = data.drop(['diagnosis', 'id'], axis=1)
y_values = data['diagnosis']

# In[7]:

data.describe()

# In[8]:

def plot_Box(data, cols=3):
    # one boxplot per column, laid out on a grid with `cols` plots per row
    size = len(data.columns)
    rows = size // cols + 1
    fig = plt.figure(figsize=(13, 10))
    cnt = 1
    for col_name in data.columns:
        ax = fig.add_subplot(rows, cols, cnt)
        plt.boxplot(data[col_name])
        ax.set_xlabel(col_name)
        cnt = cnt + 1
    plt.tight_layout()
    plt.show()

plot_Box(x_values.iloc[:, 0:8], 4)

# In[9]:

plot_Box(x_values.iloc[:, 8:16], 4)

# In[10]:

plot_Box(x_values.iloc[:, 16:32], 4)

# In[11]:

def plot_distribution(data, target_col):
    # histogram and kernel-density plot of every feature, split by diagnosis
    sns.set_style("whitegrid")
    for col_name in data.columns:
        if col_name != target_col:
            title = ("# of %s vs %s " % (col_name, target_col))
            distributionOne = sns.FacetGrid(data, hue=target_col, aspect=2.5)
            distributionOne.map(plt.hist, col_name, bins=30)
            distributionOne.add_legend()
            distributionOne.set_axis_labels(col_name, 'Count')
            distributionOne.fig.suptitle(title)
            distributionTwo = sns.FacetGrid(data, hue=target_col, aspect=2.5)  # hue assumed; lost in the published source
            distributionTwo.map(sns.kdeplot, col_name, shade=True)             # col_name restored from context
            distributionTwo.set(xlim=(0, data[col_name].max()))
            distributionTwo.add_legend()
            distributionTwo.set_axis_labels(col_name, 'Proportion')
            distributionTwo.fig.suptitle(title)

plot_distribution(data, 'diagnosis')

# In[12]:

g = sns.heatmap(x_values.corr(), cmap="BrBG", annot=False)

# In[13]:

plot_distribution(data[(data['area_mean'] > 500) & (data['area_mean'] < 800)], 'diagnosis')

# In[14]:

g = sns.heatmap(x_values.iloc[:, 1:10].corr(), annot=False)

# In[15]:

def diagnosis_to_binary(data):
    # map the B/M diagnosis labels onto 0/1 so every classifier can consume them
    data["diagnosis"] = data["diagnosis"].astype("category")
    data["diagnosis"] = data["diagnosis"].cat.rename_categories([0, 1])
    data["diagnosis"] = data["diagnosis"].astype("int")

diagnosis_to_binary(data)
x_values = data.drop(['diagnosis', 'id'], axis=1)
y_values = data['diagnosis']
x_value_scaled = preprocessing.scale(x_values)
x_value_scaled = pd.DataFrame(x_value_scaled, columns=x_values.columns, index=data["id"])
x_value_all = x_value_scaled
#x_value_all['diag'] = y_values.tolist()
#x_value_all.head()

# In[16]:

#x_value_scaled.groupby([u'diag']).agg({ 'compactness_mean': [np.mean]}).reset_index()

# In[17]:

variance_pct = .99  # minimum share of variance the PCA components must retain
pca = PCA(n_components=variance_pct)
x_transformed = pca.fit_transform(x_value_scaled, y_values)  # transform the initial features
x_values_scaled_PCA = pd.DataFrame(x_transformed)

# In[18]:

g = sns.heatmap(x_values_scaled_PCA.corr(), annot=False)

# ## Split the dataset

# In[19]:

x_value_scaled.head()

# In[20]:

y_values.head()

# In[21]:

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    """
    Plots a learning curve. http://scikit-learn.org/stable/modules/learning_curve.html
    """
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    # shaded bands show one standard deviation around each mean curve
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1, color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

dict_characters = {1: 'Malignant', 0: 'Benign'}

# In[22]:

def compareABunchOfDifferentModelsAccuracy(a, b, c, d):
    """
    compare performance of classifiers: a, b = X_train, Y_train; c, d = X_test, Y_test
    http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score
    http://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score
    """
    print('Compare Multiple Classifiers:')
    print('K-Fold cross-validation Accuracy:')
    names = []
    models = []
    resultsAccuracy = []
    models.append(('LR', LogisticRegression()))
    models.append(('RF', RandomForestClassifier()))
    models.append(('KNN', KNeighborsClassifier()))
    models.append(('SVM', SVC()))
    models.append(('LSVM', LinearSVC()))
    models.append(('GNB', GaussianNB()))
    models.append(('DTC', DecisionTreeClassifier()))
    models.append(('GBC', GradientBoostingClassifier()))
    for name, model in models:
        # learning curve per model to spot overfitting
        plot_learning_curve(model, 'Learning Curve For %s Classifier' % (name), a, b, (0.8, 1.1), 10)
    for name, model in models:
        model.fit(a, b)
        kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=7)
        accuracy_results = model_selection.cross_val_score(model, a, b, cv=kfold, scoring='accuracy')
        resultsAccuracy.append(accuracy_results)
        names.append(name)
        accuracyMessage = "%s: %f (%f)" % (name, accuracy_results.mean(), accuracy_results.std())
        print(accuracyMessage)
    # boxplot of the cross-validation scores per model
    fig = plt.figure()
    fig.suptitle('Algorithm Comparison: Accuracy')
    ax = fig.add_subplot(111)
    plt.boxplot(resultsAccuracy)
    ax.set_xticklabels(names)
    ax.set_ylabel('Cross-validation: Accuracy score')
    plt.show()

# In[56]:

train_x1, test_x1, train_y, test_y = train_test_split(x_value_scaled, y_values, test_size=0.2)

# In[57]:

train_x1.columns

# In[58]:

#'texture_se','texture_worst'
drop_list = []
train_x = train_x1.drop(drop_list, axis=1)
test_x = test_x1.drop(drop_list, axis=1)

# In[59]:

compareABunchOfDifferentModelsAccuracy(train_x, train_y, test_x, test_y)

# In[60]:

def selectParametersForSVM(a, b, c, d):
    # grid-search SVM hyperparameters on the training set (a, b),
    # then evaluate the best model on the test set (c, d)
    model = SVC()
    parameters = {'C': [0.01, 0.1, 0.5, 5.0, 10, 25, 50, 100],
                  'kernel': ['linear', 'poly', 'rbf', 'sigmoid']}
    accuracy_scorer = make_scorer(accuracy_score)
    grid_obj = GridSearchCV(model, parameters, scoring=accuracy_scorer)
    grid_obj = grid_obj.fit(a, b)
    model = grid_obj.best_estimator_
    model.fit(a, b)
    print('Selected Parameters for SVM:')
    print(model, " ")
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=7)
    accuracy = model_selection.cross_val_score(model, a, b, cv=kfold, scoring='accuracy')
    mean = accuracy.mean()
    stdev = accuracy.std()
    print('Support Vector Machine - Training set accuracy: %s (%s)' % (mean, stdev))
    print('')
    prediction = model.predict(c)
    #print(prediction[0])
    cnf_matrix = confusion_matrix(d, prediction)
    np.set_printoptions(precision=2)
    class_names = dict_characters
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion matrix')
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
                          title='Normalized confusion matrix')
    plot_learning_curve(model, 'Learning Curve For SVM Classifier', a, b, (0.85, 1.1), 10)  # ylim/cv restored from the collapsed call
    return prediction

# In[61]:

def selectParametersForLR(a, b, c, d):
    # same procedure as selectParametersForSVM; the published source collapsed
    # this cell, so the body is restored by analogy with the SVM version
    model = LogisticRegression()
    parameters = {'C': [0.01, 0.1, 0.5, 5.0, 10, 25, 50, 100]}  # only 0.01 and 100 survive in the source; middle values assumed
    accuracy_scorer = make_scorer(accuracy_score)
    grid_obj = GridSearchCV(model, parameters, scoring=accuracy_scorer)
    grid_obj = grid_obj.fit(a, b)
    model = grid_obj.best_estimator_
    model.fit(a, b)
    print('Selected Parameters for LR:')
    print(model, " ")
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=7)
    accuracy = model_selection.cross_val_score(model, a, b, cv=kfold, scoring='accuracy')
    print('Logistic Regression - Training set accuracy: %s (%s)' % (accuracy.mean(), accuracy.std()))
    print('')
    prediction = model.predict(c)
    cnf_matrix = confusion_matrix(d, prediction)
    np.set_printoptions(precision=2)
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=dict_characters, title='Confusion matrix')
    plot_learning_curve(model, 'Learning Curve For LR Classifier', a, b, (0.85, 1.1), 10)
    return prediction

# In[62]:

prediction = selectParametersForLR(train_x, train_y, test_x, test_y)

# In[63]:

prediction = selectParametersForSVM(train_x, train_y, test_x, test_y)
# collect the misclassified test samples for inspection
x_err_data = pd.DataFrame(columns=train_x.columns)
real_ = test_y.tolist()
indexs = []
err_diag = []
k = 0
for i in range(len(prediction)):
    if prediction[i] != real_[i]:
        x_err_data.loc[k] = test_x.iloc[i].tolist()
        indexs.append(test_x.index[i])
        err_diag.append(test_y.iloc[i])
        k = k + 1
x_err_data.index = indexs
x_err_data["diag"] = err_diag
x_err_data

# In[64]:

data[data['id'] == 91594602]  # inspect one misclassified record by id

# In[65]:

def selectParametersFormlPC(a, b, c, d):
    """http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
    http://scikit-learn.org/stable/modules/grid_search.html#grid-search"""
    # the published source collapsed parts of this cell; the missing steps are
    # restored by analogy with selectParametersForSVM
    model = MLPC()
    parameters = {'verbose': [False],
                  'activation': ['logistic', 'relu'],
                  'max_iter': [1000, 2000], 'learning_rate': ['constant', 'adaptive']}
    accuracy_scorer = make_scorer(accuracy_score)
    grid_obj = GridSearchCV(model, parameters, scoring=accuracy_scorer)
    grid_obj = grid_obj.fit(a, b)
    model = grid_obj.best_estimator_
    model.fit(a, b)
    print('Selected Parameters for multi-layer Perceptron NN:')
    print(model)
    print('')
    kfold = model_selection.KFold(n_splits=10)
    accuracy = model_selection.cross_val_score(model, a, b, cv=kfold, scoring='accuracy')
    mean = accuracy.mean()
    stdev = accuracy.std()
    print('SKlearn multi-layer Perceptron - Training set accuracy: %s (%s)' % (mean, stdev))
    print('')
    prediction = model.predict(c)
    cnf_matrix = confusion_matrix(d, prediction)
    np.set_printoptions(precision=2)
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=dict_characters, title='Confusion matrix')
    plot_learning_curve(model, 'Learning Curve For MLPC Classifier', a, b, (0.85, 1.1), 10)

# In[66]:

selectParametersFormlPC(train_x, train_y, test_x, test_y)

# In[71]:

def runVotingClassifier(a, b, c, d):
    """http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html
    http://scikit-learn.org/stable/modules/ensemble.html#voting-classifier"""
    #global votingC,mean,stdev  # eventually these globals should become a class; kept from the source
    votingC = VotingClassifier(estimators=[('SVM', SVC(C=5.0, cache_size=200, class_weight=None, coef0=0.0,
                                                       decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
                                                       max_iter=-1, probability=False, random_state=None, shrinking=True,
                                                       tol=0.001, verbose=False)),
                                           ('LR', LogisticRegression(C=0.1, dual=False, fit_intercept=True,
                                                                     intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
                                                                     penalty='l2', solver='liblinear', tol=0.0001,
                                                                     verbose=0, warm_start=False)),
                                           ('KNN', KNeighborsClassifier())], voting='hard')
    votingC = votingC.fit(a, b)
    kfold = model_selection.KFold(n_splits=10)
    accuracy = model_selection.cross_val_score(votingC, a, b, cv=kfold, scoring='accuracy')
    meanC = accuracy.mean()
    stdevC = accuracy.std()
    print('Ensemble Voting Classifier - Training set accuracy: %s (%s)' % (meanC, stdevC))
    print('')
    #return votingC,meanC,stdevC
    prediction = votingC.predict(c)
    cnf_matrix = confusion_matrix(d, prediction)
    np.set_printoptions(precision=2)
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=dict_characters, normalize=True,
                          title='Normalized confusion matrix')
    plot_learning_curve(votingC, 'Learning Curve For Ensemble Voting Classifier', a, b, cv=10)

# In[72]:

runVotingClassifier(train_x, train_y, test_x, test_y)

