How to fix the Celery TemplateDoesNotExist error in Django
I'm learning to use Celery with Django by following this tutorial, in which the author builds a web-scraping tool with Django and Celery. While working through it, I hit the following error message:
TemplateDoesNotExist at /
home.html,scraping/products_list.html
Here is how my files are arranged:
.
├── celerybeat-schedule.db
├── celerydjangotutorial
│ ├── __init__.py
│ ├── asgi.py
│ ├── celery.py
│ ├── settings.py
│ ├── templates
│ │ ├── base.html
│ │ └── home.html
│ ├── urls.py
│ ├── views.py
│ └── wsgi.py
├── db.sqlite3
├── manage.py
└── scraping
├── __init__.py
├── admin.py
├── apps.py
├── migrations
│ ├── 0001_initial.py
│ ├── 0002_auto_20200827_0735.py
│ ├── __init__.py
│ └── __pycache__
│ ├── 0001_initial.cpython-38.pyc
│ ├── 0002_auto_20200827_0735.cpython-38.pyc
│ └── __init__.cpython-38.pyc
├── models.py
├── tasks.py
├── tests.py
└── views.py
Here, celerydjangotutorial is the Django project and scraping is the Django app. In the tutorial, the author places the templates in the project directory and also creates a views file there. I followed along exactly as he did. Here is my code.
celerydjangotutorial / urls.py
from django.contrib import admin
from django.urls import path,include
from .views import HomePageView
urlpatterns = [
    path('', HomePageView.as_view(), name='home'),
    path('admin/', admin.site.urls),
]
celerydjangotutorial / views.py
from django.shortcuts import render
from django.views import generic
from scraping.models import Products
class HomePageView(generic.ListView):
    template_name = 'home.html'
    context_object_name = 'products'

    def get_queryset(self):
        return Products.objects.all()
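As an aside, the second name in the error message (`scraping/products_list.html`) comes from `ListView`'s default template naming: it tries the explicit `template_name` first, then a name derived from the app label and model. A rough pure-Python sketch of that candidate list (illustrative only, not Django's actual implementation):

```python
# Illustrative sketch of how a ListView builds its template candidate list:
# the explicit template_name first, then "<app_label>/<model>_list.html".
def template_candidates(template_name, app_label, model_name):
    candidates = []
    if template_name:
        candidates.append(template_name)
    candidates.append(f"{app_label}/{model_name.lower()}_list.html")
    return candidates

print(template_candidates('home.html', 'scraping', 'Products'))
# Both names appear in the TemplateDoesNotExist message.
```

The error means the loader tried both candidates against every configured template directory and found neither.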
scraping / models.py
from django.db import models
class Products(models.Model):
    title = models.CharField(max_length=200)
    link = models.CharField(max_length=2083, default="", unique=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    website = models.CharField(max_length=30, blank=True, null=True)
scraping / tasks.py
import time
import json
from datetime import datetime

from celery import Celery
from celery.schedules import crontab
from celery import shared_task
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException, NoSuchElementException
from selenium.webdriver.common.keys import Keys

from .models import Products  # needed by save_function below
@shared_task
def scrape():
    try:
        print('starting the scraping process')
        product_list = []
        url = 'https://www.amazon.com/s?k=chips&ref=nb_sb_noss_2'
        driver = webdriver.Chrome('./chromedriver')
        driver.get(url)
        driver.implicitly_wait(15)
        links = driver.find_elements_by_class_name("a-size-mini")
        for link in links:
            element = WebDriverWait(driver, 5).until(
                EC.presence_of_element_located((By.LINK_TEXT, link.text)))
            product_meta = {
                # save_function expects a 'title' key, so include it here
                'title': link.text,
                'link': element.get_attribute('href'),
                'website': 'amazon',
            }
            product_list.append(product_meta)
        print('scraping process completed')
        return save_function(product_list)
    except Exception as e:
        print('scraping failed')
        print(e)
@shared_task(serializer='json')
def save_function(product_list):
    print('saving extracted data')
    new_count = 0
    for product in product_list:
        try:
            Products.objects.create(
                title=product['title'],
                link=product['link'],
                website=product['website'],
            )
            new_count += 1
        except Exception as e:
            print('Failed at latest_product is None')
            print(e)
            break
    print('finished')
celerydjangotutorial / celery.py
from __future__ import absolute_import
import os
from celery import Celery
from celery.schedules import crontab
os.environ.setdefault('DJANGO_SETTINGS_MODULE','celerydjangotutorial.settings')
app = Celery('celerydjangotutorial')
app.conf.timezone = 'UTC'
app.config_from_object("django.conf:settings",namespace="CELERY")
app.autodiscover_tasks()
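The `namespace="CELERY"` argument in `config_from_object` means Celery only reads Django settings whose names start with `CELERY_`, stripping that prefix to get the Celery option name. A rough sketch of that filtering behavior (illustrative only, not Celery's actual code):

```python
# Illustrative sketch of what namespace="CELERY" does: settings with the
# CELERY_ prefix are picked up, the prefix is stripped, and the remainder
# is lowercased into a Celery option name.
def namespaced_options(settings, namespace='CELERY'):
    prefix = namespace + '_'
    return {name[len(prefix):].lower(): value
            for name, value in settings.items()
            if name.startswith(prefix)}

opts = namespaced_options({
    'DEBUG': True,  # ignored: no CELERY_ prefix
    'CELERY_BROKER_URL': 'amqp://localhost:5672',
    'CELERY_TASK_SERIALIZER': 'json',
})
print(opts)
```

This is why the Celery-related settings in settings.py below all carry the `CELERY_` prefix.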
settings.py
...
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'scraping.apps.ScrapingConfig',
]
...
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': ['templates'],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
...
CELERY_BROKER_URL = 'amqp://localhost:5672'
CELERY_RESULT_BACKEND = 'amqp://localhost:5672'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'
I'm not sure what I'm doing wrong. I followed most of the tutorial; apart from tasks.py, my code is similar to his. Please help me.
Thanks in advance.
[EDIT-1] Posting the full error:
Internal Server Error: /
Traceback (most recent call last):
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
    response = get_response(request)
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/core/handlers/base.py", line 145, in _get_response
    response = self.process_exception_by_middleware(e, request)
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/core/handlers/base.py", line 143, in _get_response
    response = response.render()
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/template/response.py", line 105, in render
    self.content = self.rendered_content
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/template/response.py", line 81, in rendered_content
    template = self.resolve_template(self.template_name)
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/template/response.py", line 63, in resolve_template
    return select_template(template, using=self.using)
File "/Users/sashaanksekar/anaconda3/lib/python3.8/site-packages/django/template/loader.py", line 47, in select_template
    raise TemplateDoesNotExist(','.join(template_name_list), chain=chain)
django.template.exceptions.TemplateDoesNotExist: home.html,scraping/products_list.html
Solution
Try adding the app name as a folder under the templates directory and moving the HTML files into it, so it looks like templates/celerydjangotutorial/base.html and templates/celerydjangotutorial/home.html.
Also double-check the names of your files, folders, and paths.
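Alternatively, since the templates folder already lives inside the project package, the 'DIRS' entry can point there explicitly instead of moving any files. A sketch of the relevant settings.py fragment, assuming BASE_DIR is defined as the project root the way a default settings.py defines it:

```python
import os

# Assumes BASE_DIR is the repository root, as in a default Django settings.py.
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # Point DIRS at the templates folder inside the project package,
        # so 'home.html' resolves without relocating the HTML files.
        'DIRS': [os.path.join(BASE_DIR, 'celerydjangotutorial', 'templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
```

A bare 'templates' entry in DIRS is resolved relative to the current working directory, which is fragile; an absolute path built from BASE_DIR works regardless of where the server is started.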