I want to activate or deactivate a "cache" in some class methods during execution.
I found a way to activate it:
(...)
setattr(self, "_greedy_function", my_cache_decorator(self._cache)(getattr(self, "_greedy_function")))
(...)
where self._cache is my own cache object, which stores the results of self._greedy_function.
It works fine, but what if I now want to deactivate the cache and "undecorate" _greedy_function?
I see a possible solution, storing a reference to _greedy_function before decorating it, but maybe there is a way to retrieve the original from the decorated function, which would be nicer.
As requested, here are the decorator and the cache object I use to cache the results of my class functions:
import logging
from collections import OrderedDict, namedtuple
from functools import wraps

logging.basicConfig(
    level=logging.WARNING,
    format='%(asctime)s %(name)s %(levelname)s %(message)s'
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

CacheInfo = namedtuple("CacheInfo", "hits misses maxsize currsize")

def lru_cache(cache):
    """
    A replacement for functools.lru_cache() built on a custom LRU class.
    It can cache class methods.
    """
    def decorator(func):
        logger.debug("assigning cache %r to function %s" % (cache, func.__name__))
        @wraps(func)
        def wrapped_func(*args, **kwargs):
            try:
                ret = cache[args]
                logger.debug("cached value returned for function %s" % func.__name__)
                return ret
            except KeyError:
                ret = func(*args, **kwargs)
                logger.debug("cache updated for function %s" % func.__name__)
                cache[args] = ret
                return ret
        return wrapped_func
    return decorator
class LRU(OrderedDict):
    """
    Custom implementation of an LRU cache, built on top of an OrderedDict.
    """
    __slots__ = "_hits", "_misses", "_maxsize"

    def __new__(cls, maxsize=128):
        if maxsize is None:
            return None
        return super().__new__(cls, maxsize=maxsize)

    def __init__(self, maxsize=128, *args, **kwargs):
        self.maxsize = maxsize
        self._hits = 0
        self._misses = 0
        super().__init__(*args, **kwargs)

    def __getitem__(self, key):
        try:
            value = super().__getitem__(key)
        except KeyError:
            self._misses += 1
            raise
        else:
            self.move_to_end(key)
            self._hits += 1
            return value

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self._maxsize:
            oldest, = next(iter(self))
            del self[oldest]

    def __delitem__(self, key):
        try:
            super().__delitem__((key,))
        except KeyError:
            pass

    def __repr__(self):
        return "<%s object at %s: %s>" % (self.__class__.__name__, hex(id(self)), self.cache_info())

    def cache_info(self):
        return CacheInfo(self._hits, self._misses, self._maxsize, len(self))

    def clear(self):
        super().clear()
        self._hits, self._misses = 0, 0

    @property
    def maxsize(self):
        return self._maxsize

    @maxsize.setter
    def maxsize(self, maxsize):
        if not isinstance(maxsize, int):
            raise TypeError
        elif maxsize < 2:
            raise ValueError
        elif maxsize & (maxsize - 1) != 0:
            logger.warning("LRU feature performs best when maxsize is a power-of-two, maybe.")
        while maxsize < len(self):
            oldest, = next(iter(self))
            del self[oldest]
        self._maxsize = maxsize
EDIT: I have updated my code using the __wrapped__ attribute suggested in the comments, and it works fine! The whole thing is here: https://gist.github.com/fbparis/b3ddd5673b603b42c880974b23db7cda (the kik.set_cache() method...)
Modern versions of functools.wraps install the original function as an attribute __wrapped__ on the wrappers they create. (One could search __closure__ on the nested functions usually used for this purpose, but other types could be used as well.) It is reasonable to expect any wrapper to follow this convention.
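As a sketch of how this applies here (my_cache_decorator stands in for the question's decorator, and greedy is an illustrative stand-in for _greedy_function), the original function can be recovered through __wrapped__ because functools.wraps installs it:

```python
import functools

def my_cache_decorator(cache):
    def decorator(func):
        @functools.wraps(func)  # installs func as wrapped.__wrapped__
        def wrapped(*args, **kwargs):
            if args not in cache:
                cache[args] = func(*args, **kwargs)
            return cache[args]
        return wrapped
    return decorator

def greedy(x):
    return x * 2

cache = {}
decorated = my_cache_decorator(cache)(greedy)
print(decorated(3))  # computed once, then cached

# "Undecorate" by retrieving the original through __wrapped__:
original = decorated.__wrapped__
assert original is greedy
```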
An alternative is to use a permanent wrapper controlled by a flag, so that it can be enabled and disabled without being removed and reinstated. This has the advantage that the wrapper can keep its state (here, the cached values). The flag can be a separate variable (e.g., another attribute on an object bearing the wrapped function, if any), or it can be an attribute on the wrapper itself.
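A minimal sketch of that flag-on-the-wrapper idea (switchable_cache and the enabled attribute are illustrative names, not from the question):

```python
import functools

def switchable_cache(cache):
    """Caching decorator that can be toggled via the wrapper's own
    `enabled` attribute; the cache survives while disabled."""
    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            if not wrapped.enabled:
                return func(*args, **kwargs)  # bypass the cache
            if args not in cache:
                cache[args] = func(*args, **kwargs)
            return cache[args]
        wrapped.enabled = True  # flag stored on the wrapper itself
        return wrapped
    return decorator

calls = []

@switchable_cache({})
def slow(x):
    calls.append(x)
    return x + 1

slow(1); slow(1)       # second call is served from the cache
assert calls == [1]
slow.enabled = False
slow(1)                # cache bypassed, function runs again
assert calls == [1, 1]
```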
You are overcomplicating things. The decorator can simply be removed with del self._greedy_function. There is no need for the __wrapped__ attribute.
Here is a minimal implementation of the set_cache and unset_cache methods:
class LRU(OrderedDict):
    def __init__(self, maxsize=128, *args, **kwargs):
        # ...
        self._cache = dict()
        super().__init__(*args, **kwargs)

    def _greedy_function(self):
        time.sleep(1)
        return time.time()

    def set_cache(self):
        self._greedy_function = lru_cache(self._cache)(getattr(self, "_greedy_function"))

    def unset_cache(self):
        del self._greedy_function
Using your lru_cache decorator, here are the results:
o = LRU()
o.set_cache()
print('First call', o._greedy_function())
print('Second call', o._greedy_function())  # Here it prints out the cached value
o.unset_cache()
print('Third call', o._greedy_function())   # The cache is not used
Output:
First call 1552966668.735025
Second call 1552966668.735025
Third call 1552966669.7354007
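Why del works here: set_cache binds the wrapped function as an instance attribute, which shadows the class-level method during attribute lookup; deleting the instance attribute re-exposes the original. A minimal standalone sketch (the Demo class is illustrative, not from the answer):

```python
class Demo:
    def method(self):
        return "original"

    def set_wrapper(self):
        # The instance attribute shadows the class-level method.
        self.method = lambda: "wrapped"

    def unset_wrapper(self):
        # Removing the instance attribute re-exposes the class method.
        del self.method

d = Demo()
assert d.method() == "original"
d.set_wrapper()
assert d.method() == "wrapped"
d.unset_wrapper()
assert d.method() == "original"
```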