Python 3 UnicodeDecodeError - How do I debug a UnicodeDecodeError?

Mik*_*and 6 python unicode utf-8 character-encoding python-3.x

I have a text file which the publisher (the US Securities and Exchange Commission) claims is encoded in UTF-8 (https://www.sec.gov/files/aqfs.pdf, section 4). I am processing the lines with the following code:

def tags(filename):
    """Yield Tag instances from tag.txt."""
    with codecs.open(filename, 'r', encoding='utf-8', errors='strict') as f:
        fields = f.readline().strip().split('\t')
        for line in f.readlines():
            yield process_tag_record(fields, line)

I get the following error:

Traceback (most recent call last):
  File "/home/randm/Projects/finance/secxbrl.py", line 151, in <module>
    main()
  File "/home/randm/Projects/finance/secxbrl.py", line 143, in main
    all_tags = list(tags("tag.txt"))
  File "/home/randm/Projects/finance/secxbrl.py", line 109, in tags
    content = f.read()
  File "/home/randm/Libraries/anaconda3/lib/python3.6/codecs.py", line 698, in read
    return self.reader.read(size)
  File "/home/randm/Libraries/anaconda3/lib/python3.6/codecs.py", line 501, in read
    newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 3583587: invalid start byte

Given that I probably can't go back to the SEC and tell them they have files that don't seem to be encoded in UTF-8, how should I debug and catch this error?

What I have tried

I did a hexdump of the file and found the offending text to be "SUPPLEMENTAL DISCLOSURE OF NON-CASH INVESTING". If I decode the offending byte as a hex code point (i.e. U+00AD), it makes sense in context, since it is a soft hyphen. But the following does not seem to work:

Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> b"\x41".decode("utf-8")
'A'
>>> b"\xad".decode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 0: invalid start byte
>>> b"\xc2ad".decode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc2 in position 0: invalid continuation byte

I have used errors='replace', which seems to get past it. But I would like to understand what will happen if I try to insert that data into a database.
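For reference, a quick check of what errors='replace' actually produces: the undecodable byte becomes U+FFFD, the official REPLACEMENT CHARACTER, so that is what would end up in the database:

```python
# The invalid 0xAD byte is replaced by U+FFFD REPLACEMENT CHARACTER ('�').
text = b'NON\xadCASH'.decode('utf-8', errors='replace')
assert text == 'NON\ufffdCASH'
```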

Edit, to add the hexdump:

0036ae40  31 09 09 09 09 53 55 50  50 4c 45 4d 45 4e 54 41  |1....SUPPLEMENTA|
0036ae50  4c 20 44 49 53 43 4c 4f  53 55 52 45 20 4f 46 20  |L DISCLOSURE OF |
0036ae60  4e 4f 4e ad 43 41 53 48  20 49 4e 56 45 53 54 49  |NON.CASH INVESTI|
0036ae70  4e 47 20 41 4e 44 20 46  49 4e 41 4e 43 49 4e 47  |NG AND FINANCING|
0036ae80  20 41 43 54 49 56 49 54  49 45 53 3a 09 0a 50 72  | ACTIVITIES:..Pr|

Mar*_*ers 8

Your data file is corrupt. If that character is really meant to be a U+00AD SOFT HYPHEN, then you are missing a 0xC2 byte:

>>> '\u00ad'.encode('utf8')
b'\xc2\xad'

Of all the possible UTF-8 encodings that end in 0xAD, a soft hyphen does make the most sense. However, it points to a dataset that may have other bytes missing. You just happened to have hit the one that matters.

I would go back to the source of this dataset and verify that the file was not corrupted when downloaded. Otherwise, using errors='replace' is a viable workaround, provided no delimiters (tabs, newlines, etc.) are missing.

Another possibility is that the SEC is actually using a different encoding for the file; for example, in Windows Codepage 1252 and in Latin-1, 0xAD is the correct encoding of a soft hyphen. And indeed, when I download the same dataset directly (warning, large ZIP file linked) and open tags.txt, I can't decode the data as UTF-8 either:

>>> open('/tmp/2017q1/tag.txt', encoding='utf8').read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../lib/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 3583587: invalid start byte
>>> from pprint import pprint
>>> f = open('/tmp/2017q1/tag.txt', 'rb')
>>> f.seek(3583550)
3583550
>>> pprint(f.read(100))
(b'1\t1\t\t\t\tSUPPLEMENTAL DISCLOSURE OF NON\xadCASH INVESTING AND FINANCING A'
 b'CTIVITIES:\t\nProceedsFromSaleOfIn')
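As a quick cross-check of that theory, the same byte decodes cleanly as a soft hyphen under either of those single-byte encodings:

```python
# 0xAD maps to U+00AD SOFT HYPHEN in both cp1252 and Latin-1.
assert b'\xad'.decode('cp1252') == '\u00ad'
assert b'\xad'.decode('latin-1') == '\u00ad'
```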

There are two such non-ASCII characters in the file:

>>> f.seek(0)
0
>>> pprint([l for l in f if any(b > 127 for b in l)])
[b'SupplementalDisclosureOfNoncashInvestingAndFinancingActivitiesAbstract\t0'
 b'001654954-17-000551\t1\t1\t\t\t\tSUPPLEMENTAL DISCLOSURE OF NON\xadCASH I'
 b'NVESTING AND FINANCING ACTIVITIES:\t\n',
 b'HotelKranichhheMember\t0001558370-17-001446\t1\t0\tmember\tD\t\tHotel Krani'
 b'chhhe [Member]\tRepresents information pertaining to Hotel Kranichh\xf6h'
 b'e.\n']

Hotel Kranichh\xf6he, decoded as Latin-1, is Hotel Kranichhöhe.
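A quick check of that decoding:

```python
# 0xF6 is 'ö' in both Latin-1 and cp1252, so the bytes decode cleanly there.
name = b'Hotel Kranichh\xf6he'.decode('latin-1')
assert name == 'Hotel Kranichhöhe'
```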

There are also a number of 0x1C / 0x1D pairs in the file:

>>> f.seek(0)
0
>>> quotes = [l for l in f if any(b in {0x1C, 0x1D} for b in l)]
>>> quotes[0].split(b'\t')[-1][50:130]
b'Temporary Payroll Tax Cut Continuation Act of 2011 (\x1cTCCA\x1d) recognized during th'
>>> quotes[1].split(b'\t')[-1][50:130]
b'ributory defined benefit pension plan (the \x1cAetna Pension Plan\x1d) to allow certai'

I'm betting those are really U+201C LEFT DOUBLE QUOTATION MARK and U+201D RIGHT DOUBLE QUOTATION MARK characters; note the 1C and 1D parts. It almost feels as if their encoder took UTF-16 and stripped out all the high bytes, rather than encoding to UTF-8 properly!

There is no codec shipped with Python that would encode '\u201C\u201D' to b'\x1C\x1D', so it is all the more likely that the SEC has botched their encoding process somewhere. In fact, there are also 0x13 and 0x14 characters that are probably en and em dashes (U+2013 and U+2014), as well as 0x19 bytes that are almost certainly single quotes (U+2019). All that is missing to complete the picture is a 0x18 byte to represent U+2018.
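The UTF-16 theory is easy to verify: the high byte of each quote's UTF-16 code unit is 0x20, so stripping it leaves exactly the bytes found in the file:

```python
# U+201C / U+201D in big-endian UTF-16 are the byte pairs 20 1C and 20 1D;
# dropping the high 0x20 byte leaves the bare 0x1C / 0x1D seen in the file.
left = '\u201c'.encode('utf-16-be')
right = '\u201d'.encode('utf-16-be')
assert left == b'\x20\x1c'
assert right == b'\x20\x1d'
```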

If we assume that the encoding is broken, we can attempt to repair it. The following code reads the file and fixes the quote issues, assuming that the rest of the data does not use characters outside of Latin-1 apart from the quotes:

_map = {
    # dashes
    0x13: '\u2013', 0x14: '\u2014',
    # single quotes
    0x18: '\u2018', 0x19: '\u2019',
    # double quotes
    0x1c: '\u201c', 0x1d: '\u201d',
}
def repair(line, _map=_map):
    """Repair mis-encoded SEC data. Assumes line was decoded as Latin-1"""
    return line.translate(_map)
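A quick sanity check of repair() on a made-up line (the mapping is repeated here so the snippet runs on its own):

```python
# str.translate() maps code points (int keys) to replacement strings.
_map = {
    0x13: '\u2013', 0x14: '\u2014',   # dashes
    0x18: '\u2018', 0x19: '\u2019',   # single quotes
    0x1c: '\u201c', 0x1d: '\u201d',   # double quotes
}

def repair(line, _map=_map):
    """Repair mis-encoded SEC data. Assumes line was decoded as Latin-1"""
    return line.translate(_map)

assert repair('(\x1cTCCA\x1d)') == '(\u201cTCCA\u201d)'
```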

Then apply that to the lines you read:

with open(filename, 'r', encoding='latin-1') as f:
    repaired = map(repair, f)
    fields = next(repaired).strip().split('\t')
    for line in repaired:
        yield process_tag_record(fields, line)

Separately, addressing the code you posted: you are making Python work harder than it needs to. Don't use codecs.open(); that's legacy code with known issues, and it is slower than the newer Python 3 I/O layer. Just use open(). And don't use f.readlines(); you don't need to read the whole file into a list here. Iterate over the file directly:

def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        fields = next(f).strip().split('\t')
        for line in f:
            yield process_tag_record(fields, line)

If process_tag_record also splits on tabs, use a csv.reader() object and avoid splitting each row manually:

import csv

def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        reader = csv.reader(f, delimiter='\t')
        fields = next(reader)
        for row in reader:
            yield process_tag_record(fields, row)

If process_tag_record combines the fields list with the values in row to form a dictionary, just use csv.DictReader() instead:

def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        reader = csv.DictReader(f, delimiter='\t')
        # first row is used as keys for the dictionary, no need to read fields manually.
        yield from reader
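For illustration, a tiny in-memory example (the column names and values here are made up, standing in for tag.txt) showing how csv.DictReader uses the header row as the dictionary keys:

```python
import csv
import io

# Hypothetical two-column, tab-delimited sample.
sample = io.StringIO('tag\tversion\nAccountsPayable\tus-gaap/2016\n')
reader = csv.DictReader(sample, delimiter='\t')
rows = list(reader)
# Each row is a mapping from the header names to that row's values.
assert rows[0]['tag'] == 'AccountsPayable'
assert rows[0]['version'] == 'us-gaap/2016'
```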