A Summary of Methods for Counting English Word Occurrences in a Plain Text File with Python [Tested and Working]

This article walks through several ways to count how many times each English word appears in a plain text file with Python, shared here for reference. The details are as follows:

Version 1: inefficient

# -*- coding:utf-8 -*-
#!python3
path = 'test.txt'
with open(path,encoding='utf-8',newline='') as f:
  word = []
  words_dict= {}
  for letter in f.read():
    if letter.isalnum():
      word.append(letter)
    else: # any non-alphanumeric character (space, \t, \n, punctuation) ends the current word
      if word:
        word = ''.join(word).lower() # convert to lowercase
        if word not in words_dict:
          words_dict[word] = 1
        else:
          words_dict[word] += 1
        word = []
# handle the last word (in case the file ends without a trailing separator)
if word:
  word = ''.join(word).lower() # convert to lowercase
  if word not in words_dict:
    words_dict[word] = 1
  else:
    words_dict[word] += 1
  word = []
for k,v in words_dict.items():
  print(k,v)

Output:

we 4
are 1
busy 1
all 1
day 1
like 1
swarms 1
of 6
flies 1
without 1
souls 1
noisy 1
restless 1
unable 1
to 1
hear 1
the 7
voices 1
soul 1
as 1
time 1
goes 1
by 1
childhood 1
away 2
grew 1
up 1
years 1
a 1
lot 1
memories 1
once 1
have 2
also 1
eroded 1
bottom 1
childish 1
innocence 1
regardless 1
shackles 1
mind 1
indulge 1
in 1
world 1
buckish 1
focus 1
on 1
beneficial 1
principle 1
lost 1
themselves 1
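
The first version scans the text character by character, but f.read() still pulls the whole file into memory. As a variation on the same idea, here is a minimal sketch (my own, not from the original article) that reads the file in fixed-size chunks, so even a very large file is never held in memory at once; the partially built word simply carries over across chunk boundaries:

# sketch only: the character-scan approach, reading fixed-size chunks
path = 'test.txt'
words_dict = {}
word = []
with open(path, encoding='utf-8') as f:
  while True:
    chunk = f.read(4096) # read 4 KB at a time
    if not chunk:
      break
    for letter in chunk:
      if letter.isalnum():
        word.append(letter)
      elif word: # any separator ends the current word
        token = ''.join(word).lower()
        words_dict[token] = words_dict.get(token, 0) + 1
        word = []
if word: # flush the last word if the file ends without a separator
  token = ''.join(word).lower()
  words_dict[token] = words_dict.get(token, 0) + 1
for k, v in words_dict.items():
  print(k, v)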

Version 2:

Drawback: the whole file has to be read into memory at once, which hurts performance on large files.

# -*- coding:utf-8 -*-
#!python3
import re
path = 'test.txt'
with open(path,'r',encoding='utf-8') as f:
  data = f.read()
  word_reg = re.compile(r'\w+')
  #word_reg = re.compile(r'\w+\b')
  word_list = word_reg.findall(data)
  word_list = [word.lower() for word in word_list] # convert to lowercase
  word_set = set(word_list) # deduplicate so count() is called only once per distinct word
  # words_dict = {}
  # for word in word_set:
  #   words_dict[word] = word_list.count(word)
  # more concise: a dict comprehension
  words_dict = {word: word_list.count(word) for word in word_set}
  for k,v in words_dict.items():
    print(k,v)

Output:

on 1
also 1
souls 1
focus 1
soul 1
time 1
noisy 1
grew 1
lot 1
childish 1
like 1
voices 1
indulge 1
swarms 1
buckish 1
restless 1
we 4
hear 1
childhood 1
as 1
world 1
themselves 1
are 1
bottom 1
memories 1
the 7
of 6
flies 1
without 1
have 2
day 1
busy 1
to 1
eroded 1
regardless 1
unable 1
innocence 1
up 1
a 1
in 1
mind 1
goes 1
by 1
lost 1
principle 1
once 1
away 2
years 1
beneficial 1
all 1
shackles 1
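
A side note on version 2: the dict comprehension calls word_list.count(word) once for every distinct word, and each call rescans the entire list, so the total work grows roughly quadratically with the text size. A minimal single-pass alternative using dict.get (my own sketch, not from the original article) looks like this:

# sketch only: count in a single pass with dict.get instead of list.count
import re
path = 'test.txt'
with open(path, 'r', encoding='utf-8') as f:
  data = f.read()
word_list = [word.lower() for word in re.findall(r'\w+', data)]
words_dict = {}
for word in word_list:
  words_dict[word] = words_dict.get(word, 0) + 1 # one pass over the words
for k, v in words_dict.items():
  print(k, v)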

Version 3: read the file line by line instead of all at once

# -*- coding:utf-8 -*-
#!python3
import re
path = 'test.txt'
with open(path,encoding='utf-8') as f:
  word_list = []
  word_reg = re.compile(r'\w+')
  for line in f:
    #line_words = word_reg.findall(line)
    # simpler than using the regex above
    line_words = line.split()
    word_list.extend(line_words)
  word_set = set(word_list) # deduplicate so count() is called only once per distinct word
  words_dict = {word: word_list.count(word) for word in word_set}
  for k,v in words_dict.items():
    print(k,v)

Output:

childhood 1
innocence, 1
are 1
of 6
also 1
lost 1
We 1
regardless 1
noisy, 1
by, 1
on 1
themselves. 1
grew 1
lot 1
bottom 1
buckish, 1
time 1
childish 1
voices 1
once 1
restless, 1
shackles 1
world 1
eroded 1
As 1
all 1
day, 1
swarms 1
we 3
soul. 1
memories, 1
in 1
without 1
like 1
beneficial 1
up, 1
unable 1
away 1
flies 1
goes 1
a 1
have 2
away, 1
mind, 1
focus 1
principle, 1
hear 1
to 1
the 7
years 1
busy 1
souls, 1
indulge 1
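
As the output above shows, line.split() keeps punctuation attached to its tokens ("day,", "soul.") and does not lowercase, so "We" and "we" are counted separately and the numbers no longer match versions 1 and 2. If you want the line-by-line reading of version 3 together with the cleaner tokens of version 2, a sketch along these lines (my own variation, not from the original article) would work:

# sketch only: read line by line, tokenize with the regex, lowercase
import re
path = 'test.txt'
word_reg = re.compile(r'\w+')
words_dict = {}
with open(path, encoding='utf-8') as f:
  for line in f:
    for word in word_reg.findall(line):
      word = word.lower()
      words_dict[word] = words_dict.get(word, 0) + 1
for k, v in words_dict.items():
  print(k, v)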

Version 4: count with collections.Counter

# -*- coding:utf-8 -*-
#!python3
import collections
path = 'test.txt'
with open(path,encoding='utf-8') as f:
  word_list = []
  for line in f:
    line_words = line.split()
    word_list.extend(line_words)
  words_dict = dict(collections.Counter(word_list)) # tally with collections.Counter
  for k,v in words_dict.items():
    print(k,v)

Output:

We 1
are 1
busy 1
all 1
day, 1
like 1
swarms 1
of 6
flies 1
without 1
souls, 1
noisy, 1
restless, 1
unable 1
to 1
hear 1
the 7
voices 1
soul. 1
As 1
time 1
goes 1
by, 1
childhood 1
away, 1
we 3
grew 1
up, 1
years 1
away 1
a 1
lot 1
memories, 1
once 1
have 2
also 1
eroded 1
bottom 1
childish 1
innocence, 1
regardless 1
shackles 1
mind, 1
indulge 1
in 1
world 1
buckish, 1
focus 1
on 1
beneficial 1
principle, 1
lost 1
themselves. 1
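
Version 4 still tokenizes with line.split(), so "We"/"we" and "day," are again counted as distinct tokens. As a final variation, here is a sketch (my own, not from the original article) that combines collections.Counter with the \w+ regex and prints the words from most to least frequent using Counter.most_common():

# sketch only: Counter + regex tokens, sorted by frequency
import collections
import re
path = 'test.txt'
word_reg = re.compile(r'\w+')
counter = collections.Counter()
with open(path, encoding='utf-8') as f:
  for line in f:
    counter.update(w.lower() for w in word_reg.findall(line))
for word, count in counter.most_common():
  print(word, count)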

Note: the test text test.txt used here is as follows:

We are busy all day,like swarms of flies without souls,noisy,restless,unable to hear the voices of the soul. As time goes by,childhood away,we grew up,years away a lot of memories,once have also eroded the bottom of the childish innocence,we regardless of the shackles of mind,indulge in the world buckish,focus on the beneficial principle,we have lost themselves.
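
If you want to reproduce the results, here is a small snippet (my own, for convenience only) that writes this sample sentence to test.txt before running any of the versions above:

# write the sample text used throughout this article to test.txt
text = ("We are busy all day,like swarms of flies without souls,noisy,restless,"
        "unable to hear the voices of the soul. As time goes by,childhood away,"
        "we grew up,years away a lot of memories,once have also eroded the bottom "
        "of the childish innocence,we regardless of the shackles of mind,indulge "
        "in the world buckish,focus on the beneficial principle,we have lost themselves.")
with open('test.txt', 'w', encoding='utf-8') as f:
  f.write(text)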

PS: here are two related online statistics tools for reference:

Online word count tool:
http://tools.jb51.net/code/zishutongji

Online character count and editing tool:
http://tools.jb51.net/code/char_tongji


We hope this article is helpful to readers working with Python.
