How to calculate the conditional probability of values in a PySpark dataframe?
I want to calculate, for each rating grade ('A', 'B', 'C') in the rating column, the conditional probability of each value in the type column, using PySpark.
Input:
company model rating type
0 ford mustang A coupe
1 chevy camaro B coupe
2 ford fiesta C sedan
3 ford focus A sedan
4 ford taurus B sedan
5 toyota camry B sedan
Output:
rating type conditional_probability
0 A coupe 0.50
1 B coupe 0.33
2 C sedan 1.00
3 A sedan 0.50
4 B sedan 0.66
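In other words, each output row is P(type | rating): the count of rows with that rating-and-type combination divided by the total count of rows with that rating. For example, rating A appears twice in total and once with coupe, so the (A, coupe) probability is 1/2 = 0.50; rating B appears three times, giving 1/3 ≈ 0.33 for (B, coupe) and 2/3 ≈ 0.66 for (B, sedan).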
Solution
You can use groupby to get the item counts for each rating alone and for each combination of rating and type, then use those counts to compute the conditional probability.
from pyspark.sql import functions as F
ratings_cols = ["company","model","rating","type"]
ratings_values = [
    ("ford", "mustang", "A", "coupe"),
    ("chevy", "camaro", "B", "coupe"),
    ("ford", "fiesta", "C", "sedan"),
    ("ford", "focus", "A", "sedan"),
    ("ford", "taurus", "B", "sedan"),
    ("toyota", "camry", "B", "sedan"),
]
ratings_df = spark.createDataFrame(data=ratings_values,schema=ratings_cols)
ratings_df.show()
# +-------+-------+------+-----+
# |company| model|rating| type|
# +-------+-------+------+-----+
# | ford|mustang| A|coupe|
# | chevy| camaro| B|coupe|
# | ford| fiesta| C|sedan|
# | ford| focus| A|sedan|
# | ford| taurus| B|sedan|
# | toyota| camry| B|sedan|
# +-------+-------+------+-----+
probability_df = (ratings_df.groupby(["rating","type"])
.agg(F.count(F.lit(1)).alias("rating_type_count"))
.join(ratings_df.groupby("rating").agg(F.count(F.lit(1)).alias("rating_count")),on="rating")
.withColumn("conditional_probability",F.round(F.col("rating_type_count")/F.col("rating_count"),2))
.select(["rating","type","conditional_probability"])
.sort(["type","rating"]))
probability_df.show()
# +------+-----+-----------------------+
# |rating| type|conditional_probability|
# +------+-----+-----------------------+
# | A|coupe| 0.5|
# | B|coupe| 0.33|
# | A|sedan| 0.5|
# | B|sedan| 0.67|
# | C|sedan| 1.0|
# +------+-----+-----------------------+
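As an alternative, the per-rating totals can also be computed with a window function partitioned by rating instead of a join. The following is a minimal sketch of that variant (the name rating_window is mine, not from the answer above); it produces the same result:
from pyspark.sql import functions as F
from pyspark.sql import Window
# Window covering all rows that share the same rating value.
rating_window = Window.partitionBy("rating")
probability_df = (ratings_df.groupby(["rating","type"])
    .agg(F.count(F.lit(1)).alias("rating_type_count"))
    # Total row count per rating, computed over the window instead of via a join.
    .withColumn("rating_count", F.sum("rating_type_count").over(rating_window))
    .withColumn("conditional_probability", F.round(F.col("rating_type_count")/F.col("rating_count"), 2))
    .select(["rating","type","conditional_probability"])
    .sort(["type","rating"]))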