Tags: python, google-app-engine, memcached
I've been trying to get memcache working on my app for a while now. I thought I finally had it working so that it never reads from the database (unless the memcache data is lost, of course), and then my site got shut down for too many datastore reads! I'm currently on the free appspot plan and would like to stay there as long as possible. Anyway, here is my code; maybe someone can help me find the hole in it.

I'm currently trying to implement memcache by overriding the db.Model methods all(), delete(), and put() so that they query memcache first. I have memcache set up so that every object in the datastore has its own memcache entry, with its id as the key. Then, for each Model class, a list of ids is stored under a key that the class knows how to query. I hope I've explained this clearly.
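To make sure I understand the scheme correctly, here is a minimal sketch of the id-list pattern described above, with plain dicts standing in for memcache and the datastore (names like `fake_cache`, `fake_db`, and `db_reads` are illustrative, not from the real code):

```python
fake_cache = {}                       # stands in for memcache
fake_db = {1: "hello", 2: "world"}    # stands in for the datastore

def get_all(kind):
    """Return (objects, db_reads) for `kind`, hitting the DB only on per-id misses."""
    ids = fake_cache.get(kind + "allid")
    if ids is None:
        return None                    # id list evicted: caller falls back to a full query
    result, db_reads = [], 0
    for i in ids:
        ob = fake_cache.get(str(i))
        if ob is None:                 # per-object entry was evicted
            ob = fake_db.get(i)        # one datastore read by id
            db_reads += 1
            if ob is None:
                continue               # object no longer exists
            fake_cache[str(i)] = ob    # re-prime the cache
        result.append(ob)
    return result, db_reads
```

With the id list present but one per-object entry missing, only that one object costs a datastore read, and the entry is repopulated afterwards.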
""" models.py """
@classmethod
def all(cls, order="sent"):
result = get_all("messages", Message)
if not result or memcache.get("updatemessages"):
result = list(super(Message, cls).all())
set_all("messages", result)
memcache.set("updatemessages", False)
logging.info("DB Query for messages")
result.sort(key=lambda x: getattr(x, order), reverse=True)
return result
@classmethod
def delete(cls, message):
del_from("messages", message)
super(Message, cls).delete(message)
def put(self):
super(Message, self).put()
add_to_all("messages", self)
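The "updatemessages" flag in all() above is a dirty-flag invalidation pattern. A minimal sketch of just that pattern, with a dict standing in for memcache and `query_db`/`db_queries` as hypothetical stand-ins for the datastore query:

```python
cache = {}
db_queries = []   # records each simulated full datastore query

def query_db():
    db_queries.append("full query")
    return ["msg1", "msg2"]

def all_messages():
    """Serve from cache unless the list is missing or flagged stale."""
    result = cache.get("messages")
    if result is None or cache.get("updatemessages"):
        result = query_db()            # expensive full query
        cache["messages"] = result     # refresh the cached list
        cache["updatemessages"] = False
    return result
```

Repeated calls cost nothing until someone sets the flag (or the cached list is evicted), at which point exactly one full query runs and the flag is cleared.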
""" helpers.py """
def get_all(type, Class):
all = []
ids = memcache.get(type+"allid")
query_amount = 0
if ids:
for id in ids:
ob = memcache.get(str(id))
if ob is None:
ob = Class.get_by_id(int(id))
if ob is None:
continue
memcache.set(str(id), ob)
query_amount += 1
all.append(ob)
if query_amount: logging.info(str(query_amount) + " ob queries")
return all
return None
def add_to_all(type, object):
memcache.set(str(object.key().id()), object)
all = memcache.get(type+"allid")
if not all:
all = [str(ob.key().id()) for ob in object.__class__.all()]
logging.info("DB query for %s" % type)
assert all is not None, "query returned None. Send this error code to ____: 2 3-193A"
if not str(object.key().id()) in all:
all.append(str(object.key().id()))
memcache.set(type+"allid", all)
@log_on_fail
def set_all(type, objects):
assert type in ["users", "messages", "items"], "set_all was not passed a valid type. Send this error code to ____: 33-205"
assert not objects is None, "set_all was passed None as the list of objects. Send this error code to _________: 33-206"
all = []
for ob in objects:
error = not memcache.set(str(ob.key().id()), ob)
if error:
logging.warning("keys not setting properly. Object must not be pickleable")
all.append(str(ob.key().id()))
memcache.set(type+"allid", all)
@log_on_fail
def del_from(type, object):
all = memcache.get(type+"allid")
if not all:
all = object.__class__.all()
logging.info("DB query %s" % type)
assert all, "Could not find any objects. Send this error code to _____: 13- 219"
assert str(object.key().id()) in all, "item not found in cache. Send this error code to ________: 33-220"
del all[ all.index(str(object.key().id())) ]
memcache.set(type+"allid", all)
memcache.delete(str(object.key().id()))
I apologize for all the mess and lack of elegance. Hopefully someone will be able to help. I've considered switching to ndb, but for now I'd rather stick with my custom cache. You'll notice the logging.info("some-number of ob queries") line: I get that log fairly often, maybe once or twice every half hour. Does memcache really lose my data that often, or is something wrong with my code?
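For what it's worth, memcache evicts each key independently under memory pressure, so occasional per-id misses are expected; what makes them costly is one round trip per id. The App Engine API offers memcache.get_multi() to fetch many keys at once. A hedged sketch of that batching idea, again with a dict standing in for memcache (the names `cache`, `wanted`, `found`, and `missing` are illustrative):

```python
def get_multi(cache, keys):
    """Dict-based stand-in for memcache.get_multi: returns only the keys found."""
    return {k: cache[k] for k in keys if k in cache}

cache = {"1": "a", "3": "c"}
wanted = ["1", "2", "3"]
found = get_multi(cache, wanted)                  # one round trip for all ids
missing = [k for k in wanted if k not in found]   # fetch these from the DB in one batch
```

The missing ids could then be fetched from the datastore in a single batch (e.g. with db.get()) instead of one get_by_id() per miss.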