Adding special-case idioms to Python VADER Sentiment

I have been doing some text sentiment analysis with VADER Sentiment, and I noticed that my data contains many "way to go" phrases that get incorrectly classified as neutral:

In [11]: sentiment('way to go John')
Out[11]: {'compound': 0.0, 'neg': 0.0, 'neu': 1.0, 'pos': 0.0}

After digging into the VADER source code, I found the following dictionary:

# check for special case idioms using a sentiment-laden keyword known to SAGE
SPECIAL_CASE_IDIOMS = {"the shit": 3, "the bomb": 3, "bad ass": 1.5, "yeah right": -2, "cut the mustard": 2, "kiss of death": -1.5, "hand to mouth": -2, "way to go": 3}

As you can see, I manually added the "way to go" entry. However, it seems to have no effect:

In [12]: sentiment('way to go John')
Out[12]: {'compound': 0.0, 'neg': 0.0, 'neu': 1.0, 'pos': 0.0}

Any idea what I am missing? Or, more specifically, what do I need to do to add custom idioms? Here is the VADER Sentiment source code:

#######################################################################################################################
# SENTIMENT SCORING SCRIPT
#######################################################################################################################
'''
Created on July 04, 2013
@author: C.J. Hutto
  Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for
  Sentiment Analysis of Social Media Text. Eighth International Conference on
  Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
'''

import os,math,re,sys,fnmatch,string 
reload(sys)
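# NOTE: this is Python 2 code throughout (reload(sys), unicode literals, print statements)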

f = 'C:\\Users\\jamacwan\\Code\\Python\\Twitter API\\Sentiment Analysis\\vader_sentiment_lexicon.txt' 

def make_lex_dict(f):
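    # each lexicon line is "word<TAB>valence<TAB>...": keep only the first two fields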
    return dict(map(lambda (w,m): (w,float(m)),[wmsr.strip().split('\t')[0:2] for wmsr in open(f) ]))

WORD_VALENCE_DICT = make_lex_dict(f)

# empirically derived valence ratings for words,emoticons,slang,swear words,acronyms/initialisms 


##CONSTANTS#####

#(empirically derived mean sentiment intensity rating increase for booster words)
B_INCR = 0.293
B_DECR = -0.293

#(empirically derived mean sentiment intensity rating increase for using ALLCAPs to emphasize a word)
C_INCR = 0.733

# for removing punctuation
REGEX_REMOVE_PUNCTUATION = re.compile('[%s]' % re.escape(string.punctuation))

PUNC_LIST = [".", "!", "?", ",", ";", ":", "-", "'", "\"", "!!", "!!!", "??", "???", "?!?", "!?!", "?!?!", "!?!?"]

NEGATE = ["aint","arent","cannot","cant","couldnt","darent","didnt","doesnt","ain't","aren't","can't","couldn't","daren't","didn't","doesn't","dont","hadnt","hasnt","havent","isnt","mightnt","mustnt","neither","don't","hadn't","hasn't","haven't","isn't","mightn't","mustn't","neednt","needn't","never","none","nope","nor","not","nothing","nowhere","oughtnt","shant","shouldnt","uhuh","wasnt","werent","oughtn't","shan't","shouldn't","uh-uh","wasn't","weren't","without","wont","wouldnt","won't","wouldn't","rarely","seldom","despite"]

# booster/dampener 'intensifiers' or 'degree adverbs' http://en.wiktionary.org/wiki/Category:English_degree_adverbs

BOOSTER_DICT = {"absolutely": B_INCR,"amazingly": B_INCR,"awfully": B_INCR,"completely": B_INCR,"considerably": B_INCR,"decidedly": B_INCR,"deeply": B_INCR,"effing": B_INCR,"enormously": B_INCR,"entirely": B_INCR,"especially": B_INCR,"exceptionally": B_INCR,"extremely": B_INCR,"fabulously": B_INCR,"flipping": B_INCR,"flippin": B_INCR,"fricking": B_INCR,"frickin": B_INCR,"frigging": B_INCR,"friggin": B_INCR,"fully": B_INCR,"fucking": B_INCR,"greatly": B_INCR,"hella": B_INCR,"highly": B_INCR,"hugely": B_INCR,"incredibly": B_INCR,"intensely": B_INCR,"majorly": B_INCR,"more": B_INCR,"most": B_INCR,"particularly": B_INCR,"purely": B_INCR,"quite": B_INCR,"really": B_INCR,"remarkably": B_INCR,"so": B_INCR,"substantially": B_INCR,"thoroughly": B_INCR,"totally": B_INCR,"tremendously": B_INCR,"uber": B_INCR,"unbelievably": B_INCR,"unusually": B_INCR,"utterly": B_INCR,"very": B_INCR,"almost": B_DECR,"barely": B_DECR,"hardly": B_DECR,"just enough": B_DECR,"kind of": B_DECR,"kinda": B_DECR,"kindof": B_DECR,"kind-of": B_DECR,"less": B_DECR,"little": B_DECR,"marginally": B_DECR,"occasionally": B_DECR,"partly": B_DECR,"scarcely": B_DECR,"slightly": B_DECR,"somewhat": B_DECR,"sort of": B_DECR,"sorta": B_DECR,"sortof": B_DECR,"sort-of": B_DECR}

# check for special case idioms using a sentiment-laden keyword known to SAGE
SPECIAL_CASE_IDIOMS = {"the shit": 3,"way to go": 6}

def negated(list,nWords=[],includeNT=True):
    nWords.extend(NEGATE)
    for word in nWords:
        if word in list:
            return True
    if includeNT:
        for word in list:
            if "n't" in word:
                return True
    if "least" in list:
        i = list.index("least")
        if i > 0 and list[i-1] != "at":
            return True
    return False

def normalize(score,alpha=15):
    # normalize the score to be between -1 and 1 using an alpha that approximates the max expected value
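    # e.g. normalize(3.0) = 3.0/sqrt(9.0 + 15) ~= 0.612; saturates toward +/-1 as |score| grows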
    normscore = score/math.sqrt( ((score*score) + alpha) )
    return normscore

def wildCardMatch(patternWithWildcard,listofStringsToMatchAgainst):
    listofMatches = fnmatch.filter(listofStringsToMatchAgainst,patternWithWildcard)
    return listofMatches


def isALLCAP_differential(wordList):
    countALLCAPS= 0
    for w in wordList:
        if w.isupper():
            countALLCAPS += 1
    cap_differential = len(wordList) - countALLCAPS
    if cap_differential > 0 and cap_differential < len(wordList):
        isDiff = True
    else: isDiff = False
    return isDiff

#check if the preceding words increase,decrease,or negate/nullify the valence
def scalar_inc_dec(word,valence,isCap_diff):
    scalar = 0.0
    word_lower = word.lower()
    if word_lower in BOOSTER_DICT:
        scalar = BOOSTER_DICT[word_lower]
        if valence < 0: scalar *= -1
        #check if booster/dampener word is in ALLCAPS (while others aren't)
        if word.isupper() and isCap_diff:
            if valence > 0: scalar += C_INCR
            else: scalar -= C_INCR
    return scalar

def sentiment(text):
    """
    Returns a float for sentiment strength based on the input text.
    Positive values are positive valence,negative value are negative valence.
    """
    if not isinstance(text,unicode) and not isinstance(text,str):
        text = str(text)

    wordsAndEmoticons = text.split() #doesn't separate words from adjacent punctuation (keeps emoticons & contractions)
    text_mod = REGEX_REMOVE_PUNCTUATION.sub('',text) # removes punctuation (but loses emoticons & contractions)
    wordsOnly = text_mod.split()
    # get rid of empty items or single letter "words" like 'a' and 'I' from wordsOnly
    for word in wordsOnly:
        if len(word) <= 1:
            wordsOnly.remove(word)    
    # Now remove adjacent & redundant punctuation from [wordsAndEmoticons] while keeping emoticons and contractions

    for word in wordsOnly:
        for p in PUNC_LIST:
            pword = p + word
            x1 = wordsAndEmoticons.count(pword)
            while x1 > 0:
                i = wordsAndEmoticons.index(pword)
                wordsAndEmoticons.remove(pword)
                wordsAndEmoticons.insert(i,word)
                x1 = wordsAndEmoticons.count(pword)

            wordp = word + p
            x2 = wordsAndEmoticons.count(wordp)
            while x2 > 0:
                i = wordsAndEmoticons.index(wordp)
                wordsAndEmoticons.remove(wordp)
                wordsAndEmoticons.insert(i,word)
                x2 = wordsAndEmoticons.count(wordp)

    # get rid of residual empty items or single letter "words" like 'a' and 'I' from wordsAndEmoticons
    for word in wordsAndEmoticons:
        if len(word) <= 1:
            wordsAndEmoticons.remove(word)

    # remove stopwords from [wordsAndEmoticons]
    #stopwords = [str(word).strip() for word in open('stopwords.txt')]
    #for word in wordsAndEmoticons:
    #    if word in stopwords:
    #        wordsAndEmoticons.remove(word)

    # check for negation

    isCap_diff = isALLCAP_differential(wordsAndEmoticons)

    sentiments = []
    for item in wordsAndEmoticons:
        v = 0
        i = wordsAndEmoticons.index(item)
        if (i < len(wordsAndEmoticons)-1 and item.lower() == "kind" and \
           wordsAndEmoticons[i+1].lower() == "of") or item.lower() in BOOSTER_DICT:
            sentiments.append(v)
            continue
        item_lowercase = item.lower()
        if item_lowercase in WORD_VALENCE_DICT:
            #get the sentiment valence
            v = float(WORD_VALENCE_DICT[item_lowercase])

            #check if sentiment laden word is in ALLCAPS (while others aren't)

            if item.isupper() and isCap_diff:
                if v > 0: v += C_INCR
                else: v -= C_INCR


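            # negation roughly flips and dampens the valence (empirically derived factor)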
            n_scalar = -0.74
            if i > 0 and wordsAndEmoticons[i-1].lower() not in WORD_VALENCE_DICT:
                s1 = scalar_inc_dec(wordsAndEmoticons[i-1],v,isCap_diff)
                v = v+s1
                if negated([wordsAndEmoticons[i-1]]): v = v*n_scalar
            if i > 1 and wordsAndEmoticons[i-2].lower() not in WORD_VALENCE_DICT:
                s2 = scalar_inc_dec(wordsAndEmoticons[i-2],v,isCap_diff)
                if s2 != 0: s2 = s2*0.95
                v = v+s2
                # check for special use of 'never' as valence modifier instead of negation
                if wordsAndEmoticons[i-2] == "never" and (wordsAndEmoticons[i-1] == "so" or wordsAndEmoticons[i-1] == "this"): 
                    v = v*1.5                    
                # otherwise,check for negation/nullification
                elif negated([wordsAndEmoticons[i-2]]): v = v*n_scalar
            if i > 2 and wordsAndEmoticons[i-3].lower() not in WORD_VALENCE_DICT:
                s3 = scalar_inc_dec(wordsAndEmoticons[i-3],v,isCap_diff)
                if s3 != 0: s3 = s3*0.9
                v = v+s3
                # check for special use of 'never' as valence modifier instead of negation
                if wordsAndEmoticons[i-3] == "never" and \
                   (wordsAndEmoticons[i-2] == "so" or wordsAndEmoticons[i-2] == "this") or \
                   (wordsAndEmoticons[i-1] == "so" or wordsAndEmoticons[i-1] == "this"):
                    v = v*1.25
                # otherwise,check for negation/nullification
                elif negated([wordsAndEmoticons[i-3]]): v = v*n_scalar


                # future work: consider other sentiment-laden idioms
                #other_idioms = {"back handed": -2,"blow smoke": -2,"blowing smoke": -2,"upper hand": 1,"break a leg": 2,
                #                "cooking with gas": 2,"in the black": 2,"in the red": -2,"on the ball": 2,"under the weather": -2}

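                # build bi-gram/tri-gram strings around the current word and look each up in the idioms table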
                onezero = u"{} {}".format(wordsAndEmoticons[i-1],wordsAndEmoticons[i])
                twoonezero = u"{} {} {}".format(wordsAndEmoticons[i-2],wordsAndEmoticons[i-1],wordsAndEmoticons[i])
                twoone = u"{} {}".format(wordsAndEmoticons[i-2],wordsAndEmoticons[i-1])
                threetwoone = u"{} {} {}".format(wordsAndEmoticons[i-3],wordsAndEmoticons[i-2],wordsAndEmoticons[i-1])
                threetwo = u"{} {}".format(wordsAndEmoticons[i-3],wordsAndEmoticons[i-2])
                if onezero in SPECIAL_CASE_IDIOMS:
                    v = SPECIAL_CASE_IDIOMS[onezero]
                elif twoonezero in SPECIAL_CASE_IDIOMS:
                    v = SPECIAL_CASE_IDIOMS[twoonezero]
                elif twoone in SPECIAL_CASE_IDIOMS:
                    v = SPECIAL_CASE_IDIOMS[twoone]
                elif threetwoone in SPECIAL_CASE_IDIOMS:
                    v = SPECIAL_CASE_IDIOMS[threetwoone]
                elif threetwo in SPECIAL_CASE_IDIOMS:
                    v = SPECIAL_CASE_IDIOMS[threetwo]
                if len(wordsAndEmoticons)-1 > i:
                    zeroone = u"{} {}".format(wordsAndEmoticons[i],wordsAndEmoticons[i+1])
                    if zeroone in SPECIAL_CASE_IDIOMS:
                        v = SPECIAL_CASE_IDIOMS[zeroone]
                if len(wordsAndEmoticons)-1 > i+1:
                    zeroonetwo = u"{} {} {}".format(wordsAndEmoticons[i],wordsAndEmoticons[i+1],wordsAndEmoticons[i+2])
                    if zeroonetwo in SPECIAL_CASE_IDIOMS:
                        v = SPECIAL_CASE_IDIOMS[zeroonetwo]

                # check for booster/dampener bi-grams such as 'sort of' or 'kind of'
                if threetwo in BOOSTER_DICT or twoone in BOOSTER_DICT:
                    v = v+B_DECR

            # check for negation case using "least"
            if i > 1 and wordsAndEmoticons[i-1].lower() not in WORD_VALENCE_DICT \
                and wordsAndEmoticons[i-1].lower() == "least":
                if (wordsAndEmoticons[i-2].lower() != "at" and wordsAndEmoticons[i-2].lower() != "very"):
                    v = v*n_scalar
            elif i > 0 and wordsAndEmoticons[i-1].lower() not in WORD_VALENCE_DICT \
                and wordsAndEmoticons[i-1].lower() == "least":
                v = v*n_scalar
        sentiments.append(v) 

    # check for modification in sentiment due to contrastive conjunction 'but'
    if 'but' in wordsAndEmoticons or 'BUT' in wordsAndEmoticons:
        try: bi = wordsAndEmoticons.index('but')
        except: bi = wordsAndEmoticons.index('BUT')
        for s in sentiments:
            si = sentiments.index(s)
            if si < bi: 
                sentiments.pop(si)
                sentiments.insert(si,s*0.5)
            elif si > bi: 
                sentiments.pop(si)
                sentiments.insert(si,s*1.5) 

    if sentiments:                      
        sum_s = float(sum(sentiments))
        #print sentiments,sum_s

        # check for added emphasis resulting from exclamation points (up to 4 of them)
        ep_count = text.count("!")
        if ep_count > 4: ep_count = 4
        ep_amplifier = ep_count*0.292 #(empirically derived mean sentiment intensity rating increase for exclamation points)
        if sum_s > 0:  sum_s += ep_amplifier
        elif  sum_s < 0: sum_s -= ep_amplifier

        # check for added emphasis resulting from question marks (2 or 3+)
        qm_count = text.count("?")
        qm_amplifier = 0
        if qm_count > 1:
            if qm_count <= 3: qm_amplifier = qm_count*0.18
            else: qm_amplifier = 0.96
            if sum_s > 0:  sum_s += qm_amplifier
            elif  sum_s < 0: sum_s -= qm_amplifier

        compound = normalize(sum_s)

        # want separate positive versus negative sentiment scores
        pos_sum = 0.0
        neg_sum = 0.0
        neu_count = 0
        for sentiment_score in sentiments:
            if sentiment_score > 0:
                pos_sum += (float(sentiment_score) +1) # compensates for neutral words that are counted as 1
            if sentiment_score < 0:
                neg_sum += (float(sentiment_score) -1) # when used with math.fabs(),compensates for neutrals
            if sentiment_score == 0:
                neu_count += 1

        if pos_sum > math.fabs(neg_sum): pos_sum += (ep_amplifier+qm_amplifier)
        elif pos_sum < math.fabs(neg_sum): neg_sum -= (ep_amplifier+qm_amplifier)

        total = pos_sum + math.fabs(neg_sum) + neu_count
        pos = math.fabs(pos_sum / total)
        neg = math.fabs(neg_sum / total)
        neu = math.fabs(neu_count / total)

    else:
        compound = 0.0; pos = 0.0; neg = 0.0; neu = 0.0

    s = {"neg" : round(neg, 3), "neu" : round(neu, 3), "pos" : round(pos, 3), "compound" : round(compound, 4)}
    return s


if __name__ == '__main__':
    # --- examples -------
    sentences = [
                u"VADER is smart,handsome,and funny.",# positive sentence example
                u"VADER is smart,and funny!",# punctuation emphasis handled correctly (sentiment intensity adjusted)
                u"VADER is very smart,# booster words handled correctly (sentiment intensity adjusted)
                u"VADER is VERY SMART,and FUNNY.",# emphasis for ALLCAPS handled
                u"VADER is VERY SMART,and FUNNY!!!",# combination of signals - VADER appropriately adjusts intensity
                u"VADER is VERY SMART,really handsome,and INCREDIBLY FUNNY!!!",# booster words & punctuation make this close to ceiling for score
                u"The book was good.",# positive sentence
                u"The book was kind of good.",# qualified positive sentence is handled correctly (intensity adjusted)
                u"The plot was good,but the characters are uncompelling and the dialog is not great.",# mixed negation sentence
                u"A really bad,horrible book.",# negative sentence with booster words
                u"At least it isn't a horrible book.",# negated negative sentence with contraction
                u":) and :D",# emoticons handled
                u"",# an empty string is correctly handled
                u"Today sux",#  negative slang handled
                u"Today sux!",#  negative slang with punctuation emphasis handled
                u"Today SUX!",#  negative slang with capitalization emphasis
                u"Today kinda sux! But I'll get by,lol" # mixed sentiment example with slang and constrastive conjunction "but"
                 ]
    paragraph = "It was one of the worst movies I've seen,despite good reviews. \
    Unbelievably bad acting!! Poor direction. VERY poor production. \
    The movie was bad. Very bad movie. VERY bad movie. VERY BAD movie. VERY BAD movie!"

    from nltk import tokenize
    lines_list = tokenize.sent_tokenize(paragraph)
    sentences.extend(lines_list)

    tricky_sentences = [
                        "Most automated sentiment analysis tools are shit.","VADER sentiment analysis is the shit.","Sentiment analysis has never been good.","Sentiment analysis with VADER has never been this good.","Warren Beatty has never been so entertaining.","I won't say that the movie is astounding and I wouldn't claim that the movie is too banal either.","I like to hate Michael Bay films,but I Couldn't fault this one","It's one thing to watch an Uwe Boll film,but another thing entirely to pay for it","The movie was too good","This movie was actually neither that funny,nor super witty.","This movie doesn't care about cLeverness,wit or any other kind of intelligent humor.","Those who find ugly meanings in beautiful things are corrupt without being charming.","There are slow and repetitive parts,BUT it has just enough spice to keep it interesting.","The script is not fantastic,but the acting is decent and the cinematography is EXCELLENT!","Roger Dodger is one of the most compelling variations on this theme.","Roger Dodger is one of the least compelling variations on this theme.","Roger Dodger is at least compelling as a variation on the theme.","they fall in love with the product","but then it breaks","usually around the time the 90 day warranty expires","the twin towers collapsed today","However,Mr. Carter solemnly argues,his client carried out the kidnapping under orders and in the ''least offensive way possible.''"
                        ]
    sentences.extend(tricky_sentences)
    for sentence in sentences:
        print sentence
        ss = sentiment(sentence)
        print "\t" + str(ss)

    print "\n\n Done!"
Best answer
There are several problems with the code:

> The special cases are applied only to words that appear in vader_sentiment_lexicon.txt, because of this guard:

if item_lowercase in WORD_VALENCE_DICT:
    #get the sentiment valence
    ...
    if onezero in SPECIAL_CASE_IDIOMS:
        v = SPECIAL_CASE_IDIOMS[onezero]
...

If you change your phrase to include such a word, e.g. 'abandon', this check passes.
How to fix:

if item_lowercase in WORD_VALENCE_DICT:
    #get the sentiment valence
    v = float(WORD_VALENCE_DICT[item_lowercase])
else:
    v = 0
#move the next statements out of the if
#check if sentiment laden word is in ALLCAPS (while others aren't)
if item.isupper() and isCap_diff:
    if v > 0: v += C_INCR
    else: v -= C_INCR

along with a few more fixes inside the moved block.
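For illustration, a minimal sketch of that first fix (the helper name valence_for is hypothetical, and only the bi-gram idiom case is shown for brevity):

def valence_for(item, i, wordsAndEmoticons, isCap_diff):
    # default to 0.0 instead of skipping tokens that are missing from the lexicon
    v = float(WORD_VALENCE_DICT.get(item.lower(), 0.0))
    if item.isupper() and isCap_diff and v != 0:
        v = v + C_INCR if v > 0 else v - C_INCR
    # the idiom lookup now runs for every position i >= 1, lexicon hit or not
    if i > 0:
        onezero = u"{} {}".format(wordsAndEmoticons[i-1], wordsAndEmoticons[i]).lower()
        if onezero in SPECIAL_CASE_IDIOMS:
            v = SPECIAL_CASE_IDIOMS[onezero]
    return v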
> The special cases are checked only when the sentiment-laden keyword sits at index 3 or later (i > 2):

if i > 0 and wordsAndEmoticons[i-1].lower() not in WORD_VALENCE_DICT:
    ... # no SPECIAL_CASE_IDIOMS check
if i > 1 and wordsAndEmoticons[i-2].lower() not in WORD_VALENCE_DICT:
    ... # no SPECIAL_CASE_IDIOMS check
if i > 2 and wordsAndEmoticons[i-3].lower() not in WORD_VALENCE_DICT:
    ...
    twoonezero = u"{} {} {}".format(wordsAndEmoticons[i-2],wordsAndEmoticons[i-1],wordsAndEmoticons[i])
    ...
    elif twoonezero in SPECIAL_CASE_IDIOMS: ...

Here, in the phrase 'way to abandon John', the word 'abandon' has index 2, so none of these branches is ever reached. If we change the phrase so the sentiment-laden word comes later, e.g. 'way for you to abandon John', the idiom check starts running.
How to fix: move the special-case handling into its own branch. Better yet, match each idiom using its actual length instead of hard-coding positions.
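A possible shape for that position-independent check: scan the token list once and match each idiom by its actual word count (idiom_valences is a hypothetical helper, not part of VADER):

def idiom_valences(wordsAndEmoticons):
    # return {start_index: valence} for every special-case idiom found, whatever its length
    tokens = [w.lower() for w in wordsAndEmoticons]
    found = {}
    for idiom, valence in SPECIAL_CASE_IDIOMS.items():
        parts = idiom.split()
        for start in range(len(tokens) - len(parts) + 1):
            if tokens[start:start + len(parts)] == parts:
                found[start] = valence
    return found

# idiom_valences('way to go John'.split()) -> {0: 6}, regardless of the idiom's position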

Summary: the code is not easy to maintain or extend.
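If upgrading is an option: recent releases of the vaderSentiment package on PyPI expose the lexicon as a plain dict on SentimentIntensityAnalyzer, so custom single-word ratings (though not multi-word idioms) can be added without patching the source. A quick sketch, assuming Python 3 and pip install vaderSentiment:

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
analyzer.lexicon.update({"stoked": 2.9})  # add a custom single-word valence
print(analyzer.polarity_scores("I am so stoked about this"))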
