Counting word frequencies

from collections import Counter
colors = ['red', 'blue', 'red', 'green', 'blue', 'blue']
c = Counter(colors)
print(dict(c))
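
With Python 3.7+, where dicts keep insertion order, this prints {'red': 2, 'blue': 3, 'green': 1}.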

Counter operations

You can create an empty Counter:

cnt = Counter()

You can then build up counts on the empty Counter (a small sketch follows the constructor examples below).
You can also pass in an iterable or a mapping (list, string, dict, etc.) when creating one:

c = Counter('gallahad')                 # from a string
c = Counter({'red': 4, 'blue': 2})      # from a dict (mapping)
c = Counter(cats=4, dogs=8)             # from keyword arguments
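
As mentioned above, once you have an empty Counter you can build it up by hand; here is a minimal sketch (the word list is made up for illustration):

cnt = Counter()
for word in ['red', 'blue', 'red']:     # hypothetical data
    cnt[word] += 1                      # missing keys default to 0, so this just works
cnt.update(['blue', 'green'])           # update() adds counts from any iterable
#Counter({'red': 2, 'blue': 2, 'green': 1})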

To check whether an element is present, you can convert the Counter to a dict and check there, or use the Counter directly; indexing a missing element returns 0 rather than raising KeyError:

c = Counter(['eggs', 'ham'])
c['bacon']                              # a missing element returns 0
#0
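
A small follow-up sketch of the membership checks mentioned above, continuing with the same c:

'eggs' in c                             # True  - Counter supports the in operator
'bacon' in c                            # False - indexing above returned 0 but did not add the key
'eggs' in dict(c)                       # True  - the dict-conversion approach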

Deleting elements:

c['sausage'] = 0                        # counter entry with a zero count
del c['sausage']
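
Note the difference between the two lines: setting a count to zero keeps the entry in the Counter, while del removes it entirely. A quick sketch, reusing c from above:

c['sausage'] = 0
'sausage' in c                          # True  - the key is still present with count 0
del c['sausage']
'sausage' in c                          # False - the entry is gone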

Getting all elements (elements() repeats each element as many times as its count and ignores counts that are zero or negative):

c = Counter(a=4, b=2, c=0, d=-2)
list(c.elements())
#['a', 'a', 'a', 'a', 'b', 'b']

Viewing the k most common elements:

Counter('abracadabra').most_common(3)
#[('a', 5), ('b', 2), ('r', 2)]
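
Calling most_common() with no argument returns all elements, ordered from most to least common (on Python 3.7+ ties keep first-encounter order):

Counter('abracadabra').most_common()
#[('a', 5), ('b', 2), ('r', 2), ('c', 1), ('d', 1)]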

Updating and combining Counters:

c = Counter(a=3, b=1)
d = Counter(a=1, b=2)
c + d                       # add counts
#Counter({'a': 4, 'b': 3})
c - d                       # subtract counts; results <= 0 are dropped
#Counter({'a': 2})
c & d                       # intersection: minimum of each count
#Counter({'a': 1, 'b': 1})
c | d                       # union: maximum of each count
#Counter({'a': 3, 'b': 2})
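
These operators return new Counters. For in-place changes there are also the update() and subtract() methods, which add or subtract counts from another iterable or mapping (unlike the - operator, subtract() keeps zero and negative counts). A minimal sketch:

c = Counter(a=3, b=1)
c.update(Counter(a=1, b=2))             # in-place addition; c is now Counter({'a': 4, 'b': 3})
c.subtract(Counter(a=1, b=2))           # in-place subtraction; c is back to Counter({'a': 3, 'b': 1})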

Examples

Example: read a file, count word frequencies, and sort by frequency. The file consists of many sentences of space-separated words:

from collections import Counter
with open("./data/input.txt", "r") as f:
    lines = f.read().splitlines()
lines = [line.split(" ") for line in lines]
words = []
for line in lines:
    words.extend(line)
result = Counter(words)
print(result.most_common(10))

When the file to be processed is large and read() cannot load it all at once, you can read it in fixed-size chunks and accumulate the counts:

from collections import Counter
result = Counter()
with open("./data/input.txt", "r") as f:
    leftover = ""                       # partial word carried over from the previous chunk
    while True:
        chunk = f.read(1024)
        if not chunk:                   # empty string means end of file
            break
        chunk = leftover + chunk
        words = chunk.split()           # split on any whitespace (spaces and newlines)
        # the chunk may end in the middle of a word; keep that piece for the next round
        if not chunk[-1].isspace():
            leftover = words.pop() if words else ""
        else:
            leftover = ""
        result += Counter(words)
    if leftover:                        # count the final partial word, if any
        result[leftover] += 1

print(result.most_common(10))
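
Since the file here is line-oriented anyway, a simpler alternative is to iterate over the file object itself, which reads it lazily one line at a time instead of loading everything into memory; a minimal sketch of the same count:

from collections import Counter
result = Counter()
with open("./data/input.txt", "r") as f:
    for line in f:                      # the file is read lazily, line by line
        result.update(line.split())
print(result.most_common(10))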

Reposted from: https://blog.csdn.net/qwe1257/article/details/83272340
