

How to fix low G-mean and MCC in binary classification on imbalanced data

I artificially increased the imbalance ratio to show how different popular scoring metrics reflect classification performance. I also artificially added some missing values to check that my pipeline works correctly. However, I am seeing very low values for the Matthews correlation coefficient (MCC) and the G-mean, while ROC-AUC, average precision, and weighted F1 are very high. All of these metrics are popular for evaluating classifiers on imbalanced data. My impression is that ROC-AUC, average precision, and weighted F1 fail to capture the class-imbalance problem, and I would like to understand why. I am also not sure which metric I should report; I am mostly interested in the positive (minority) cases here.
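
To illustrate what I mean, here is a minimal, self-contained sketch (separate from my pipeline below) of a degenerate classifier that always predicts the majority class; the class counts (390 vs. 10) are made up purely for illustration. Weighted F1 stays high because it is dominated by the majority class, while MCC and G-mean collapse:

import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef
from imblearn.metrics import geometric_mean_score

# Hypothetical ground truth: 390 majority (label 1) and 10 minority (label 0) samples
y_true = np.array([1] * 390 + [0] * 10)
# Degenerate classifier that always predicts the majority class
y_pred = np.ones_like(y_true)

# Weighted F1 is dominated by the majority class, so it stays high (~0.96)
print("weighted F1:", f1_score(y_true, y_pred, average='weighted'))
# MCC is undefined when only one class is predicted; sklearn returns 0.0
print("MCC:", matthews_corrcoef(y_true, y_pred))
# G-mean multiplies per-class recalls, so missing every minority sample drives it to 0
print("G-mean:", geometric_mean_score(y_true, y_pred))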

from sklearn.metrics import make_scorer
from sklearn.datasets import load_breast_cancer
from imblearn.metrics import geometric_mean_score
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import SVC
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import f1_score

X,y = load_breast_cancer(return_X_y=True)

EXT_IMB_RATE = 0.025

# Randomly undersample the minority class to make the data set highly imbalanced
minIdx = np.where(y == 0)[0]   # malignant (212 samples) -> minority class
majIdx = np.where(y == 1)[0]   # benign (357 samples) -> majority class
# keep only ~2.5% of the 569 samples as minority examples
# (note: np.random.choice samples with replacement by default, so duplicates are possible)
ssMinIdx = np.random.choice(minIdx, int(np.round((212 + 357) * EXT_IMB_RATE)))
y_ExtImb = np.append(y[majIdx], y[ssMinIdx])
X_ExtImb = np.concatenate((X[majIdx], X[ssMinIdx]), axis=0)
print(np.bincount(y_ExtImb))   # class counts after undersampling

rng = np.random.RandomState(42)
def add_missing_values(X_full,y_full):
    n_samples,n_features = X_full.shape

    # Add missing values to 25% of the samples (one randomly chosen feature per affected sample)
    missing_rate = 0.25
    n_missing_samples = int(n_samples * missing_rate)

    missing_samples = np.zeros(n_samples,dtype=bool)
    missing_samples[: n_missing_samples] = True

    rng.shuffle(missing_samples)
    missing_features = rng.randint(0,n_features,n_missing_samples)
    X_missing = X_full.copy()
    X_missing[missing_samples,missing_features] = np.nan
    y_missing = y_full.copy()

    return X_missing,y_missing


X_miss,y_miss = add_missing_values(X_ExtImb,y_ExtImb)
print(np.count_nonzero(np.isnan(X_miss)))   # number of injected NaN values

# impute missing values with a constant 0, scale to [-1, 1], then fit an SVC
LR_pipe = Pipeline([("impute", SimpleImputer(strategy='constant', fill_value=0)),
                    ("scale", MaxAbsScaler()),
                    ("SVC", SVC())])
gmean = make_scorer(geometric_mean_score,greater_is_better=True)
MCC = make_scorer(matthews_corrcoef,greater_is_better=True)
scores = cross_validate(LR_pipe, X_miss, y_miss, cv=5,
                        scoring={'G-mean': gmean, 'F1': 'f1_weighted', 'MCC': MCC,
                                 'AUC': 'roc_auc', 'Avg_Precision': 'average_precision'})

print(sorted(scores.keys()))   # keys returned by cross_validate
SVC_Gmean = scores['test_G-mean'].mean()
SVC_MCC = scores['test_MCC'].mean()
SVC_AUC = scores['test_AUC'].mean()
SVC_precision = scores['test_Avg_Precision'].mean()
SVC_F1 = scores['test_F1'].mean()

print("MCC: %f" % (SVC_MCC))
print("G-mean: %f" % (SVC_Gmean))
print("F1 score: %f" % (SVC_F1 ))
print("AUC: %f" % (SVC_AUC))
print("Average Precision: %f" % (SVC_precision))

My results:

MCC: 0.552093
G-mean: 0.557539
F1 score: 0.972603
AUC: 0.985915
Average Precision: 0.999365
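
Since I care most about the minority class, and in my undersampled data that class is label 0 (malignant) while the 'average_precision' scorer treats label 1 as the positive class by default, one option I have considered is also reporting per-class scores for label 0. A rough sketch along those lines, reusing LR_pipe, X_miss, and y_miss from above (the scorer names are placeholders, and zero_division requires scikit-learn >= 0.22):

from sklearn.metrics import recall_score, precision_score, f1_score, make_scorer

# Per-class scorers focused on the minority class (label 0 after the undersampling above)
minority_scoring = {
    'recall_minority': make_scorer(recall_score, pos_label=0),
    'precision_minority': make_scorer(precision_score, pos_label=0, zero_division=0),
    'f1_minority': make_scorer(f1_score, pos_label=0, zero_division=0),
}

minority_scores = cross_validate(LR_pipe, X_miss, y_miss, cv=5, scoring=minority_scoring)
for name in minority_scoring:
    print(name, minority_scores['test_' + name].mean())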
