Python: limiting the size of a JSON string posted to a server

Asked by sea*_*ers (tags: python, post, json)

I'm posting several hundred thousand JSON records to a server whose maximum data upload limit is 1MB. My records can vary quite a bit in size, from a few hundred bytes up to a few hundred thousand bytes.

def checkSize(payload):
    return len(payload) >= bytesPerMB


toSend = []
for row in rows:
    toSend.append(row)
    postData = json.dumps(toSend)   # re-serialize the whole accumulated batch
    if checkSize(postData):
        sendToServer(postData)
        toSend = []                 # start the next batch fresh

and then post to the server. It currently works, but constantly re-dumping the batch to a jsonified string feels really heavy, nearly 100% more work than necessary, though I can't seem to find another way. Could I instead stringify each new record on its own and just keep a running tally of what they would add up to together?

I'm sure there must be a cleaner way, but I just don't know it.

Thanks for any and all help.


This is the answer I'm using now. I came up with it at the same time as @rsegal below, and I'm posting it just for clarity and completeness (sendToServer is just a dummy function to show everything is working correctly):

import pickle
import json

f = open("userProfiles", "rb")  # pickled list of row records
rows = pickle.load(f)
f.close()

bytesPerMB = 1024 * 1024
comma = ","
appendSize = len(comma)

def sendToServer(obj):
    #send to server
    pass

def checkSize(numBytes):
    return numBytes >= bytesPerMB

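# compact separators: a single "," between items, so appendSize above matches
# exactly what json.dumps will emit between rows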
def jsonDump(obj):
    return json.dumps(obj, separators=(comma, ":"))

leftover = []
numRows = len(rows)
rowsSent = 0

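# Build batches: seed each batch with any row left over from the previous pass,
# then keep popping rows until adding one more would push the payload over 1MB.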
while len(rows) > 0:
    toSend = leftover[:]
    toSendSize = len( jsonDump(toSend) )
    leftover = []
    first = len(toSend) == 0

    while True:
        try:
            row = rows.pop()
        except IndexError:
            break

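        # this row's serialized size, plus the comma that will separate it
        # from the previous row (no comma is needed before the first row)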
        rowSize = len( jsonDump(row) ) + (0 if first else appendSize)
        first = False

        if checkSize(toSendSize + rowSize):
            leftover.append(row)
            break

        toSend.append(row)
        toSendSize += rowSize

    rowsSent += len(toSend)
    postData = jsonDump(toSend)
    print "assuming to send '{0}' bytes, actual size '{1}'. rows sent {2}, total {3}".format(toSendSize, len(postData), rowsSent, numRows)
    sendToServer(postData)

Answer from rse*_*gal:

I would do something like the following:

toSend = []
toSendLength = 0
for row in rows:
    tentativeLength = len(json.dumps(row))
    if tentativeLength > bytesPerMB:
        # parsingBehavior placeholder: do something about lolhuge rows
        pass
    elif toSendLength + tentativeLength > bytesPerMB: # it would be too large
        sendToServer(json.dumps(toSend)) # don't exceed limit; send now
        toSend = [row] # refresh for next round - and we know it fits!
        toSendLength = tentativeLength
    else: # otherwise, it won't be too long, so add it in
        toSend.append(row)
        toSendLength += tentativeLength
sendToServer(json.dumps(toSend)) # send whatever is left once the loop finishes below the limit
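
For completeness, here is a minimal, self-contained sketch of how the loop above might be wired up end to end. The dummy rows, the 1MB constant, and the sendToServer stub are illustrative assumptions rather than part of the original answer, and raising on an oversized row stands in for the parsingBehavior placeholder:

import json

bytesPerMB = 1024 * 1024  # assumed to match the server's 1MB limit from the question

def sendToServer(postData):
    # illustrative stub; a real implementation would POST postData to the server
    print("sending %d bytes" % len(postData))

def postInBatches(rows):
    toSend = []
    toSendLength = 0
    for row in rows:
        tentativeLength = len(json.dumps(row))
        if tentativeLength > bytesPerMB:
            # stand-in for the parsingBehavior placeholder
            raise ValueError("a single row exceeds the upload limit")
        elif toSendLength + tentativeLength > bytesPerMB:
            sendToServer(json.dumps(toSend))
            toSend = [row]
            toSendLength = tentativeLength
        else:
            toSend.append(row)
            toSendLength += tentativeLength
    if toSend:
        sendToServer(json.dumps(toSend))

# dummy data just to exercise the batching
rows = [{"id": i, "payload": "x" * 5000} for i in range(500)]
postInBatches(rows)

Note that, like the snippet above, this only tallies the per-row lengths; the brackets and commas that json.dumps adds around the batch are not counted, so in practice you would want to leave a small safety margin below the 1MB limit.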

The problem with your solution is that it isn't great from a Big-O standpoint. Mine runs in linear time; yours runs in quadratic time, because you re-serialize and measure the entire accumulated batch on every pass through the loop. Resetting postData on every iteration isn't very efficient, either.
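
To make the Big-O point concrete, here is a rough sketch contrasting the two patterns; the row count and sizes are illustrative only, and absolute timings will vary by machine:

import json
import time

rows = [{"i": i, "s": "x" * 100} for i in range(2000)]

# quadratic pattern (as in the question): re-serialize the whole batch each loop
start = time.time()
batch = []
for row in rows:
    batch.append(row)
    payload = json.dumps(batch)       # work grows with the size of the batch
quadratic = time.time() - start

# linear pattern (as in this answer): dump each row once, keep a running total
start = time.time()
total = 0
for row in rows:
    total += len(json.dumps(row))     # roughly constant work per row
linear = time.time() - start

print("re-dump every loop: %.3fs   incremental tally: %.3fs" % (quadratic, linear))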