Make the program run faster

Problem description (votes: -2, answers: 5)

I wrote a program that checks a text document for curse words. I turn the document into a list of words and send each word to a website to check whether it is a curse word. The problem is that when the text is large, the program runs very slowly. How can I make it faster?

import urllib.request

def read_text():
    quotes = open(r"C:\Self\General\Pooja\Edu_Career\Learning\Python\Code\Udacity_prog_foundn_python\movie_quotes.txt") #built-in function
    contents_of_file = quotes.read().split()
    #print(contents_of_file)
    quotes.close()
    check_profanity(contents_of_file)

def check_profanity(text_to_check):
    flag = 0
    for word in text_to_check:
        connection = urllib.request.urlopen("http://www.wdylike.appspot.com/?q="+word)
        output = connection.read()
        # print(output)
        connection.close()
        if b"true" in output:     # the response is read as bytes, so compare bytes to bytes
            flag = flag + 1

    if flag > 0:
        print("profanity alert")
    else:
        print("the text has no curse words")

read_text()
python python-3.x
5 Answers
1 vote

The website you are using supports checking multiple words per fetch. So, to make your code faster: A) break out of the loop as soon as you find the first curse word; B) send a "super word" (many words joined together) to the site. Hence:

def check_profanity(text_to_check):
    flag = 0
    batch = []
    for i, word in enumerate(text_to_check):
        batch.append(word)
        # 100, or whatever maximum number of words you can check at the same time
        if len(batch) == 100 or i == len(text_to_check) - 1:
            connection = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + "%20".join(batch))
            output = connection.read()
            connection.close()
            batch = []
            if b"true" in output:
                flag = flag + 1
                break   # stop at the first batch that contains a curse word
    if flag > 0:
        print("profanity alert")
    else:
        print("the text has no curse words")

1 vote

First, as Menno Van Dijk suggests, storing a subset of common known curse words locally allows a quick up-front check for profanity without needing to query the website at all; if a known curse word is found, you can alert immediately without checking anything else.

Second, inverting that suggestion, cache at least the few thousand most common known non-curse words locally; there is no reason every text containing the words "is", "the" or "a" should be re-checking those words over and over. Since the vast majority of written English uses mostly the two thousand most common words (and an even larger majority uses almost exclusively the ten thousand most common words), this can save an awful lot of checks.

Third, uniquify your words before checking them; if a word is used repeatedly, it is just as good or bad the second time as it was the first, so checking it twice is wasteful.

Finally, as MTMD suggests, the site allows you to batch your queries, so do so.

Between all of these suggestions, you will likely go from a 100,000-word file needing 100,000 connections to needing only 1-2. While multithreading might have helped your original code (at the expense of hammering the web service), these fixes should make multithreading pointless; with only 1-2 requests, you can wait the second or two they would take to run sequentially.

As a purely stylistic issue, having read_text call check_profanity is odd; those should really be separate behaviors (read_text returns the text, which check_profanity can then be called on).

Implementing my suggestions (and assuming files exist with one known word per line, one for bad words and one for good words):

import itertools  # For islice, useful for batching
import urllib.request

def load_known_words(filename):
    with open(filename) as f:
        return frozenset(map(str.rstrip, f))

known_bad_words = load_known_words(r"C:\path\to\knownbadwords.txt")
known_good_words = load_known_words(r"C:\path\to\knowngoodwords.txt")

def read_text():
    with open(r"C:\Self\General\Pooja\Edu_Career\Learning\Python\Code\Udacity_prog_foundn_python\movie_quotes.txt") as quotes:
        return quotes.read()

def check_profanity(text_to_check):
    # Uniquify contents so words aren't checked repeatedly
    if not isinstance(text_to_check, (set, frozenset)):
        text_to_check = set(text_to_check)

    # Remove words known to be fine from set to check
    text_to_check -= known_good_words

    # Precheck for any known bad words so loop is skipped completely if found
    has_profanity = not known_bad_words.isdisjoint(text_to_check)
    while not has_profanity and text_to_check:
        block_to_check = frozenset(itertools.islice(text_to_check, 100))
        text_to_check -= block_to_check

        with urllib.request.urlopen("http://www.wdylike.appspot.com/?q="+'%20'.join(block_to_check)) as connection:
            output = connection.read()
        # print(output)
        has_profanity = b"true" in output

    if has_profanity:
        print("profanity alert")
    else:
        print("the text has no curse words")

text = read_text()
check_profanity(text.split())

0 votes

You can do a few things (a rough sketch combining them follows below):

  1. Read the text in batches.
  2. Feed each batch of text to a worker process, which then checks it for profanity.
  3. Introduce a cache that keeps common curse words available offline, to minimize the number of HTTP requests needed.
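
A minimal sketch of those ideas combined, assuming a hypothetical local word list known_bad_words.txt (one known curse word per line); the batch size, worker count and the test.txt file name are placeholders rather than anything from the original question:

import urllib.parse
import urllib.request
from multiprocessing import Pool

def load_known_bad(path="known_bad_words.txt"):   # hypothetical offline cache
    with open(path) as f:
        return frozenset(line.strip() for line in f)

KNOWN_BAD = load_known_bad()

def batch_has_profanity(batch):
    # Consult the local cache first so common curse words never need an HTTP request
    if KNOWN_BAD.intersection(batch):
        return True
    url = "http://www.wdylike.appspot.com/?q=" + urllib.parse.quote(" ".join(batch))
    with urllib.request.urlopen(url) as connection:
        return b"true" in connection.read()

def check_profanity(words, batch_size=100, workers=4):
    batches = [words[i:i + batch_size] for i in range(0, len(words), batch_size)]
    with Pool(workers) as pool:
        # any() stops consuming results as soon as one worker reports profanity
        return any(pool.imap_unordered(batch_has_profanity, batches))

if __name__ == "__main__":
    with open("test.txt") as quotes:
        words = quotes.read().split()
    print("profanity alert" if check_profanity(words) else "the text has no curse words")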

0 votes

Use multithreading. Read the text in batches. Assign each batch to a thread and check all the batches separately.
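
A rough sketch of that approach, assuming the hypothetical file test.txt and an arbitrary batch size and thread count; a thread pool fits here because the work is I/O bound (waiting on the web service):

import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def batch_has_profanity(batch):
    # One HTTP request per batch of words
    url = "http://www.wdylike.appspot.com/?q=" + urllib.parse.quote(" ".join(batch))
    with urllib.request.urlopen(url) as connection:
        return b"true" in connection.read()

def check_profanity(words, batch_size=100, threads=8):
    batches = [words[i:i + batch_size] for i in range(0, len(words), batch_size)]
    # Threads overlap the waiting time on the web service, one batch per task
    with ThreadPoolExecutor(max_workers=threads) as executor:
        if any(executor.map(batch_has_profanity, batches)):
            print("profanity alert")
        else:
            print("the text has no curse words")

with open("test.txt") as quotes:
    check_profanity(quotes.read().split())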


0 votes

Send many words at once. Change number_of_words to the number of words you want to send per request.

import urllib.request

def read_text():
    quotes = open("test.txt")
    contents_of_file = quotes.read().split()
    quotes.close()
    check_profanity(contents_of_file)

def check_profanity(text):
    number_of_words = 200
    word_lists = [text[x:x+number_of_words] for x in range(0, len(text), number_of_words)]
    flag = False
    for word_list in word_lists:
        connection = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + "%20".join(word_list))
        output = connection.read()
        connection.close()
        if b"true" in output:
            flag = True
            break
    if flag:
        print("profanity alert")
    else:
        print("the text has no curse words")

read_text()