django.db.utils.InternalError: (1050, "Table 'django_content_type' already exists")
I just copied a project from my friend, and when I run makemigrations it works fine. But for -
python3 manage.py migrate
it gives this error -
Operations to perform:
Apply all migrations: admin, auth, balancesheet, contenttypes, dynapp, pandl2, sessions, trialbal2
Running migrations:
Applying contenttypes.0001_initial...
Traceback (most recent call last):
  File "/home/hostbooks/django1/myproject/lib/python3.6/site-packages/django/db/backends/utils.py", line 82, in _execute
    return self.cursor.execute(sql)
  File "/home/hostbooks/django1/myproject/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
    return self.cursor.execute(query, args)
  File "/home/hostbooks/django1/myproject/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
    result = self._query(query)
  File "/home/hostbooks/django1/myproject/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
    conn.query(q)
  File "/home/hostbooks/django1/myproject/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/home/hostbooks/django1/myproject/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result …

What is the difference between request.data and serializers.data in DRF?
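For the (1050, "Table 'django_content_type' already exists") error in the first question above: it usually means the database already contains the tables (the project was copied along with an existing database) while Django's migration history does not record them as applied. If the existing schema really matches the initial migrations, Django's `--fake-initial` flag marks them as applied without re-issuing the CREATE TABLE statements. A sketch of the usual workaround, under that assumption; back up the database first:

```shell
# Record already-applied initial migrations without re-creating existing tables.
# Only safe when the current schema actually matches the migration files.
python3 manage.py migrate --fake-initial
```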
When I write function-based views in DRF, I use both of them -
elif request.method == 'POST':
    serializer = datesSerializer(data=request.data)
    if serializer.is_valid():
        serializer.save()
and,
startdate = serializer.data['startdate']
enddate = serializer.data['enddate']
But I can't find what the difference between them is, or how using one versus the other changes the code.
How do I use serializer.initial_data in a Django REST API? What is the difference between using request.data and serializer.initial_data?
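The relationship can be sketched with a toy class that mirrors DRF's serializer lifecycle (plain Python for illustration only, not DRF itself; `ToyDatesSerializer` is a hypothetical stand-in): `initial_data` is the raw input you passed in (typically `request.data`), `is_valid()` converts and checks it into `validated_data`, and `.data` is the serialized output representation.

```python
# Toy model of DRF's serializer lifecycle -- illustration only, not real DRF.
from datetime import date

class ToyDatesSerializer:
    def __init__(self, data=None):
        self.initial_data = data      # raw, unvalidated input (e.g. request.data)
        self.validated_data = None    # filled in by is_valid()
        self.errors = {}

    def is_valid(self):
        # Convert/validate the raw input; real DRF runs field validators here.
        try:
            self.validated_data = {
                'startdate': date.fromisoformat(self.initial_data['startdate']),
                'enddate': date.fromisoformat(self.initial_data['enddate']),
            }
            return True
        except (KeyError, ValueError) as exc:
            self.errors = {'detail': str(exc)}
            return False

    @property
    def data(self):
        # Output representation: validated values rendered back to primitives.
        return {k: v.isoformat() for k, v in self.validated_data.items()}

request_data = {'startdate': '2020-01-01', 'enddate': '2020-01-31'}
serializer = ToyDatesSerializer(data=request_data)
assert serializer.initial_data is request_data   # exactly what was passed in
assert serializer.is_valid()                     # now validated_data is populated
print(serializer.validated_data['startdate'])    # a date object, not a string
print(serializer.data['startdate'])              # serialized back out: '2020-01-01'
```

In real DRF the flow is analogous: `request.data` is the parsed request body; passing it as `data=` stores it on `serializer.initial_data`; after `is_valid()` succeeds, `serializer.validated_data` holds the converted values, and `serializer.data` is the serialized output (after `save()`, it reflects the saved instance).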
I am making a chatbot in Python. Code:
import nltk
import numpy as np
import random
import string
f=open('/home/hostbooks/ML/stewy/speech/chatbot.txt','r',errors = 'ignore')
raw=f.read()
raw=raw.lower()# converts to lowercase
sent_tokens = nltk.sent_tokenize(raw)# converts to list of sentences
word_tokens = nltk.word_tokenize(raw)# converts to list of words
lemmer = nltk.stem.WordNetLemmatizer()
def LemTokens(tokens):
    return [lemmer.lemmatize(token) for token in tokens]

remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)

def LemNormalize(text):
    return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))
GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up","hey","hii")
GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I am glad! You …
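The block is truncated, but given the GREETING_INPUTS/GREETING_RESPONSES lists it presumably continues with a greeting matcher. A minimal sketch consistent with those lists (pure standard library, no NLTK needed; the `greeting` function name is an assumption):

```python
import random
import string

GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up", "hey", "hii")
GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello"]

def greeting(sentence):
    """Return a random canned response if the sentence contains a greeting word,
    otherwise None (so the caller can fall back to the main response logic)."""
    for word in sentence.lower().split():
        if word.strip(string.punctuation) in GREETING_INPUTS:
            return random.choice(GREETING_RESPONSES)
    return None
```

For example, `greeting("Hi there!")` returns one of GREETING_RESPONSES, while `greeting("Tell me about chatbots")` returns None and the bot would proceed to its retrieval-based answer.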