How to fix clustering that doesn't look right
I can't figure out where my code went wrong: my plot doesn't show all 4 clusters. Any ideas?
Solution
Is your data continuous or categorical? It looks categorical. Computing distances between binary variables doesn't mean much, and not all data is suitable for clustering.
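To see why distance-based clustering struggles on binary data, here is a small sketch (the data is made up for illustration): the pairwise Euclidean distances between binary rows collapse onto a handful of values, so k-means has almost no geometry to work with.

```python
import numpy as np

# Hypothetical data: 6 samples with 2 binary features
binary = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1]], dtype=float)

# All pairwise Euclidean distances between binary rows
diff = binary[:, None, :] - binary[None, :, :]
dists = np.sqrt((diff ** 2).sum(axis=-1))

# Only three distinct distances are possible: 0, 1 and sqrt(2)
print(sorted({round(float(d), 4) for d in dists.ravel()}))  # [0.0, 1.0, 1.4142]
```

With so few distinct distances, many points are equidistant from several centroids, so cluster assignments are close to arbitrary.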
I don't have your actual data, but I'll show you how to cluster both correctly and incorrectly using the canonical mtcars example dataset.
# Import the mtcars data from the web and do some clustering on it
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.cluster import MiniBatchKMeans
# Import CSV mtcars
data = pd.read_csv('https://gist.githubusercontent.com/ZeccaLehn/4e06d2575eb9589dbe8c365d61cb056c/raw/64f1660f38ef523b2a1a13be77b002b98665cdfe/mtcars.csv')
# Edit element of column header
data.rename(columns={'Unnamed: 0':'brand'},inplace=True)
X1 = data.iloc[:, 1:11]  # feature columns only (exclude the target, carb)
Y1 = data.iloc[:, -1]    # carb, used as the label
#lets try to plot Decision tree to find the feature importance
from sklearn.tree import DecisionTreeClassifier
tree= DecisionTreeClassifier(criterion='entropy',random_state=1)
tree.fit(X1,Y1)
imp = pd.DataFrame(index=X1.columns, data=tree.feature_importances_, columns=['Imp'])
imp = imp.sort_values(by='Imp', ascending=False)  # assign the sorted result back
sns.barplot(x=imp.index.tolist(), y=imp.values.ravel(), palette='coolwarm')
plt.show()
X=data[['cyl','drat']]
Y=data['carb']
#lets try to create segments using K means clustering
from sklearn.cluster import KMeans
#using elbow method to find no of clusters
wcss=[]
for i in range(1, 7):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=1)
    kmeans.fit(X)
    wcss.append(kmeans.inertia_)
plt.plot(range(1,7),wcss,linestyle='--',marker='o',label='WCSS value')
plt.title('WCSS value- Elbow method')
plt.xlabel('no of clusters- K value')
plt.ylabel('Wcss value')
plt.legend()
plt.show()
# The elbow suggests roughly 5 clusters, so refit with k=5
kmeans = MiniBatchKMeans(n_clusters=5, random_state=1)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
print(centroids)
print(labels)
colors = ["green","red","blue","yellow","orange"]
plt.scatter(X.iloc[:,0],X.iloc[:,1],c=np.array(colors)[labels],s = 10,alpha=.1)
plt.scatter(centroids[:, 0], centroids[:, 1], marker="x", s=150, linewidths=5, zorder=10, c=colors)
plt.show()
...now I'll change just the two features (the two independent variables) and re-run the same experiment...
X = data[['wt', 'qsec']]
Y = data['carb']
# Repeat the elbow method and the clustering on the new feature pair
wcss = []
for i in range(1, 7):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=1)
    kmeans.fit(X)
    wcss.append(kmeans.inertia_)
plt.plot(range(1, 7), wcss, linestyle='--', marker='o', label='WCSS value')
plt.legend()
plt.show()
kmeans = MiniBatchKMeans(n_clusters=5, random_state=1)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=np.array(colors)[labels], s=10, alpha=.1)
plt.scatter(centroids[:, 0], centroids[:, 1], marker="x", s=150, linewidths=5, zorder=10, c=colors)
plt.show()
As you can see, the choice of features used for clustering has a huge effect on the result (obviously). The first example looks somewhat like yours; the second looks like a much more useful and interesting clustering experiment.
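If you want a number rather than an eyeball judgment, the silhouette score quantifies how well-separated a clustering is. Below is a sketch on synthetic stand-in data (not mtcars): one "poor" feature pair where a near-categorical column is paired with pure noise, and one "good" pair with genuine blob structure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.RandomState(1)

# "Poor" pair: a categorical-like feature (3 levels) plus pure noise
poor = np.column_stack([rng.choice([4.0, 6.0, 8.0], 200), rng.normal(0, 5, 200)])
# "Good" pair: four well-separated blobs
good = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in [(0, 0), (3, 0), (0, 3), (3, 3)]])

scores = {}
for name, X in [('poor features', poor), ('good features', good)]:
    labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)
    scores[name] = silhouette_score(X, labels)
    print(name, round(scores[name], 3))
```

Scores near 1 indicate compact, well-separated clusters; scores near 0 mean the partition is essentially arbitrary, which is what you should expect when clustering on poorly chosen features.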