Per-request cache in Django?

Cha*_*ert 18 django django-cache

I want to implement a decorator that provides per-request caching for any method, not just views. Here is an example use case.

I have a custom tag that determines whether a record in a long list of records is a "favorite". To check whether an item is a favorite, you have to query the database. Ideally, you would run one query to get all the favorites, and then just check each record against that cached list.

One solution is to fetch all the favorites in the view, pass that set into the template, and then into each tag call.

Alternatively, the tag itself could run the query, but only on the first call, and cache the result for subsequent calls. The benefit is that you could use this tag in any template, on any view, without alerting the view.

With the existing caching mechanism, you could cache the result for 50 ms and assume that it correlates with the current request. I want to make that correlation reliable.

Here is an example of the tag I currently have.

@register.filter()
def is_favorite(record, request):
    # Abuse request.POST as a per-request store so the database is only
    # queried on the first call for a given request.
    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        favorites = get_favorites(request.user)

        post = request.POST.copy()
        post["get_favorites"] = favorites
        request.POST = post

    return record in favorites

Is there a way to get the current request object from Django without passing it around? From a tag, I can just pass in the request, and it will always exist. But I would like to use this decorator from other functions.

Is there an existing implementation of a per-request cache?

hre*_*ef_ 24

Using custom middleware you can get a Django cache instance that is guaranteed to be cleared for each request.

This is what I use in a project:

from threading import currentThread
from django.core.cache.backends.locmem import LocMemCache

_request_cache = {}
_installed_middleware = False

def get_request_cache():
    assert _installed_middleware, 'RequestCacheMiddleware not loaded'
    return _request_cache[currentThread()]

# LocMemCache is a threadsafe local memory cache
class RequestCache(LocMemCache):
    def __init__(self):
        name = 'locmemcache@%i' % hash(currentThread())
        params = dict()
        super(RequestCache, self).__init__(name, params)

class RequestCacheMiddleware(object):
    def __init__(self):
        global _installed_middleware
        _installed_middleware = True

    def process_request(self, request):
        # Create the cache for this thread on first use, and clear it at the
        # start of every request so values never leak across requests.
        cache = _request_cache.get(currentThread()) or RequestCache()
        _request_cache[currentThread()] = cache

        cache.clear()

To use the middleware, register it in settings.py, e.g.:

MIDDLEWARE_CLASSES = (
    ...
    'myapp.request_cache.RequestCacheMiddleware'
)

You can then use the cache like this:

from myapp.request_cache import get_request_cache

cache = get_request_cache()

For more information, see the Django low-level cache API documentation:

Django low-level cache API
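
For example, the per-request cache exposes the same get/set methods as any other Django cache backend. A minimal sketch, assuming the middleware above is installed and reusing get_favorites from the question:

from myapp.request_cache import get_request_cache

def favorites_for(user):
    cache = get_request_cache()
    key = 'favorites:%s' % user.pk
    favorites = cache.get(key)
    if favorites is None:
        # Only the first call within a request hits the database;
        # get_favorites() is the query helper from the question above.
        favorites = set(get_favorites(user))
        cache.set(key, favorites)
    return favorites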

It should be easy to modify a memoize decorator to use the request cache. Have a look at the Python Decorator Library for a good example of a memoize decorator:

Python Decorator Library
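
As a rough sketch of that idea (request_memoize is a made-up name, not taken from the decorator library, and it assumes the RequestCacheMiddleware above is installed):

import hashlib
from functools import wraps

from myapp.request_cache import get_request_cache

def request_memoize(func):
    """Memoize func for the lifetime of the current request."""
    @wraps(func)
    def wrapper(*args):
        cache = get_request_cache()
        # Key on the function name plus the repr() of its arguments, hashed to
        # keep keys short. A cached value of None is treated as a cache miss.
        key = hashlib.md5(('%s:%r' % (func.__name__, args)).encode('utf-8')).hexdigest()
        result = cache.get(key)
        if result is None:
            result = func(*args)
            cache.set(key, result)
        return result
    return wrapper

The get_favorites(user) helper from the question could then simply be decorated with @request_memoize.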

  • Beware of this solution! The _request_cache dictionary keeps filling up as more and more threads are spawned to serve your users, and it is never cleaned out. Depending on how your web server stores Python globals, this can cause a memory leak. (4 upvotes)
  • Yes - clear the cache in process_response and process_exception - there is a very good example of this in the django-cuser middleware plugin. See: https://github.com/Alir3z4/django-cuser/blob/master/cuser/middleware.py (2 upvotes)
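
Following that suggestion, one possible cleanup (a sketch, not part of the original answer) is to drop the thread's entry once the request finishes, so _request_cache cannot grow without bound:

class RequestCacheMiddleware(object):
    def __init__(self):
        global _installed_middleware
        _installed_middleware = True

    def process_request(self, request):
        cache = _request_cache.get(currentThread()) or RequestCache()
        cache.clear()
        _request_cache[currentThread()] = cache

    def process_response(self, request, response):
        # Remove this thread's cache once the response is ready.
        _request_cache.pop(currentThread(), None)
        return response

    def process_exception(self, request, exception):
        # Also clean up if the view raised an exception.
        _request_cache.pop(currentThread(), None)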

cor*_*ror 6

EDIT:

The final solution I came up with has been compiled into a PyPI package: https://pypi.org/project/django-request-cache/

EDIT 2016-06-15:

I discovered a significantly simpler solution to this problem, and I'm a bit embarrassed that I didn't realize from the start how easy this should be.

from django.core.cache.backends.base import BaseCache
from django.core.cache.backends.locmem import LocMemCache
from django.utils.synch import RWLock


class RequestCache(LocMemCache):
    """
    RequestCache is a customized LocMemCache which stores its data cache as an instance attribute, rather than
    a global. It's designed to live only as long as the request object that RequestCacheMiddleware attaches it to.
    """

    def __init__(self):
        # We explicitly do not call super() here, because while we want BaseCache.__init__() to run, we *don't*
        # want LocMemCache.__init__() to run, because that would store our caches in its globals.
        BaseCache.__init__(self, {})

        self._cache = {}
        self._expire_info = {}
        self._lock = RWLock()

class RequestCacheMiddleware(object):
    """
    Creates a fresh cache instance as request.cache. The cache instance lives only as long as request does.
    """

    def process_request(self, request):
        request.cache = RequestCache()

With this, you can use request.cache as a cache instance that lives only as long as request does, and it is fully cleaned up by the garbage collector when the request is done.
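
For example, anything that already has the request can use it like a normal cache backend. A small sketch (favorites_count is an illustrative view name, and get_favorites is the query helper from the question):

from django.http import HttpResponse

def favorites_count(request):
    favorites = request.cache.get('favorites')
    if favorites is None:
        # Only the first lookup in this request hits the database.
        favorites = set(get_favorites(request.user))
        request.cache.set('favorites', favorites)
    return HttpResponse(str(len(favorites)))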

If you need access to the request object from a context where it isn't normally available, you can use one of the various implementations of a so-called "global request middleware" that can be found online.
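
The core of such a middleware is usually just a thread-local; a rough sketch (these names are illustrative, not from any particular package):

import threading

_thread_locals = threading.local()

def get_current_request():
    """Return the request being processed on this thread, or None."""
    return getattr(_thread_locals, 'request', None)

class GlobalRequestMiddleware(object):
    def process_request(self, request):
        _thread_locals.request = request

    def process_response(self, request, response):
        _thread_locals.request = None
        return response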

**ORIGINAL ANSWER:**

A major problem that the other solutions don't solve is that LocMemCache leaks memory when you create and destroy several of them over the life of a single process. django.core.cache.backends.locmem defines several global dictionaries that hold references to every LocMemCache instance's cache data, and those dictionaries are never emptied.

The code below solves this problem. It started as a combination of @href_'s answer and the cleaner logic used by the code linked in @squarelogic.hayden's comment, which I then refined further.

from uuid import uuid4
from threading import current_thread

from django.core.cache.backends.base import BaseCache
from django.core.cache.backends.locmem import LocMemCache
from django.utils.synch import RWLock


# Global in-memory store of cache data. Keyed by name, to provide multiple
# named local memory caches.
_caches = {}
_expire_info = {}
_locks = {}


class RequestCache(LocMemCache):
    """
    RequestCache is a customized LocMemCache with a destructor, ensuring that creating
    and destroying RequestCache objects over and over doesn't leak memory.
    """

    def __init__(self):
        # We explicitly do not call super() here, because while we want
        # BaseCache.__init__() to run, we *don't* want LocMemCache.__init__() to run.
        BaseCache.__init__(self, {})

        # Use a name that is guaranteed to be unique for each RequestCache instance.
        # This ensures that it will always be safe to call del _caches[self.name] in
        # the destructor, even when multiple threads are doing so at the same time.
        self.name = uuid4()
        self._cache = _caches.setdefault(self.name, {})
        self._expire_info = _expire_info.setdefault(self.name, {})
        self._lock = _locks.setdefault(self.name, RWLock())

    def __del__(self):
        del _caches[self.name]
        del _expire_info[self.name]
        del _locks[self.name]


class RequestCacheMiddleware(object):
    """
    Creates a cache instance that persists only for the duration of the current request.
    """

    _request_caches = {}

    def process_request(self, request):
        # The RequestCache object is keyed on the current thread because each request is
        # processed on a single thread, allowing us to retrieve the correct RequestCache
        # object in the other functions.
        self._request_caches[current_thread()] = RequestCache()

    def process_response(self, request, response):
        self.delete_cache()
        return response

    def process_exception(self, request, exception):
        self.delete_cache()

    @classmethod
    def get_cache(cls):
        """
        Retrieve the current request's cache.

        Returns None if RequestCacheMiddleware is not currently installed via 
        MIDDLEWARE_CLASSES, or if there is no active request.
        """
        return cls._request_caches.get(current_thread())

    @classmethod
    def clear_cache(cls):
        """
        Clear the current request's cache.
        """
        cache = cls.get_cache()
        if cache:
            cache.clear()

    @classmethod
    def delete_cache(cls):
        """
        Delete the current request's cache object to avoid leaking memory.
        """
        cache = cls._request_caches.pop(current_thread(), None)
        del cache