I am looking for a simple way to find out the byte size of arrays and dictionary objects, such as
[ [1,2,3], [4,5,6] ] or { 1:{2:2} }
Many threads say to use pylab, for example:
from pylab import *
A = array( [ [1,2,3], [4,5,6] ] )
A.nbytes
24
But what about dictionaries? I have seen lots of answers suggesting pysize or heapy. Torsten Marek gives a simple answer in this link: Which Python memory profiler is recommended?, but I don't have a clear interpretation of the output, because the byte counts don't match.
Pysize seems more complicated, and I haven't figured out how to use it yet.
Given the simplicity of the size calculations I want to perform (no classes or complicated structures), any ideas for a simple way to estimate the memory usage of this kind of object?
Kind regards.
There is:
>>> import sys
>>> sys.getsizeof([1,2, 3])
96
>>> a = []
>>> sys.getsizeof(a)
72
>>> a = [1]
>>> sys.getsizeof(a)
80
But I wouldn't call it reliable, because Python has per-object overhead, and some objects contain nothing but references to other objects, so it is not quite the same as in C and other languages.
Read the docs on sys.getsizeof and go from there, I guess.
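For instance, a list's reported size only covers the list object itself and the reference slots it holds, not the objects those references point to. A small sketch (exact numbers depend on the Python build):

import sys

small = [1, 2, 3]
big = ['x' * 1000, 'y' * 1000, 'z' * 1000]

# Both lists hold three references, so getsizeof reports similar sizes
# even though the contents of the second list are far larger.
print(sys.getsizeof(small))
print(sys.getsizeof(big))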
A bit late to the party, but an easy way to get the size of a dict is to pickle it first.
Using sys.getsizeof on Python objects (dicts included) may not be accurate, since it does not count referenced objects.
The way to deal with this is to serialize the object into a string and use sys.getsizeof on that string. The result will be much closer to what you want.
import cPickle
import sys

mydict = {'key1': 'some long string', 'key2': ['some', 'list'], 'key3': 'whatever other data'}

# Calling sys.getsizeof(mydict) directly is not that accurate -- pickle it first
mydict_as_string = cPickle.dumps(mydict)

# Now we can find out how much space it takes
print sys.getsizeof(mydict_as_string)
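If you are on Python 3, the same idea works with the built-in pickle module; a minimal sketch, keeping in mind that the pickled length measures the serialized form and only approximates the in-memory footprint:

import pickle
import sys

mydict = {'key1': 'some long string', 'key2': ['some', 'list'], 'key3': 'whatever other data'}

# Serialize first, then measure the resulting bytes object.
mydict_as_bytes = pickle.dumps(mydict)
print(sys.getsizeof(mydict_as_bytes))  # size of the pickled bytes, not of the live objects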
None of the answers here are truly generic.
The following solution handles objects of any type recursively, without the cost of a recursive implementation:
import gc
import sys
def get_obj_size(obj):
    marked = {id(obj)}
    obj_q = [obj]
    sz = 0

    while obj_q:
        sz += sum(map(sys.getsizeof, obj_q))

        # Look up all the objects referred to by the objects in obj_q.
        # See: https://docs.python.org/3.7/library/gc.html#gc.get_referents
        all_refr = ((id(o), o) for o in gc.get_referents(*obj_q))

        # Filter out objects that are already marked.
        # Using dict notation will prevent repeated objects.
        new_refr = {o_id: o for o_id, o in all_refr if o_id not in marked and not isinstance(o, type)}

        # The new obj_q will be the objects that were not marked,
        # and we will update marked with their ids so we will
        # not traverse them again.
        obj_q = new_refr.values()
        marked.update(new_refr.keys())

    return sz
For example:
>>> import numpy as np
>>> x = np.random.rand(1024).astype(np.float64)
>>> y = np.random.rand(1024).astype(np.float64)
>>> a = {'x': x, 'y': y}
>>> get_obj_size(a)
16816
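Applied to the structures from the question it works the same way (exact byte counts depend on the Python version and platform, so none are quoted here):

print(get_obj_size([[1, 2, 3], [4, 5, 6]]))  # outer list, the two inner lists and the six ints
print(get_obj_size({1: {2: 2}}))             # outer dict, inner dict and the int keys/values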
For more information, see my repository, or simply install my package (objsize):
$ pip install objsize
Then:
>>> from objsize import get_deep_size
>>> get_deep_size(a)
16816
Use this recipe, taken from here:
http://code.activestate.com/recipes/577504-compute-memory-footprint-of-an-object-and-its-cont/
from __future__ import print_function
from sys import getsizeof, stderr
from itertools import chain
from collections import deque
try:
    from reprlib import repr
except ImportError:
    pass
def total_size(o, handlers={}, verbose=False):
    """ Returns the approximate memory footprint of an object and all of its contents.

    Automatically finds the contents of the following builtin containers and
    their subclasses: tuple, list, deque, dict, set and frozenset.
    To search other containers, add handlers to iterate over their contents:

        handlers = {SomeContainerClass: iter,
                    OtherContainerClass: OtherContainerClass.get_elements}
    """
    dict_handler = lambda d: chain.from_iterable(d.items())
    all_handlers = {tuple: iter,
                    list: iter,
                    deque: iter,
                    dict: dict_handler,
                    set: iter,
                    frozenset: iter,
                   }
    all_handlers.update(handlers)   # user handlers take precedence
    seen = set()                    # track which object id's have already been seen
    default_size = getsizeof(0)     # estimate sizeof object without __sizeof__

    def sizeof(o):
        if id(o) in seen:           # do not double count the same object
            return 0
        seen.add(id(o))
        s = getsizeof(o, default_size)

        if verbose:
            print(s, type(o), repr(o), file=stderr)

        for typ, handler in all_handlers.items():
            if isinstance(o, typ):
                s += sum(map(sizeof, handler(o)))
                break
        return s

    return sizeof(o)
##### Example call #####
if __name__ == '__main__':
    d = dict(a=1, b=2, c=3, d=[4,5,6,7], e='a string of chars')
    print(total_size(d, verbose=True))
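As a sketch of the handlers argument mentioned in the docstring, a custom container type can be made visible to total_size by telling it how to iterate over its contents (MyStack and get_elements below are hypothetical names invented just for this example):

class MyStack:
    # Toy container that total_size does not know about by default.
    def __init__(self, *items):
        self.items = list(items)

    def get_elements(self):
        return iter(self.items)

s = MyStack('a string of chars', [4, 5, 6, 7])

# The handler tells total_size how to walk MyStack instances,
# so their contents are included in the estimate.
print(total_size(s, handlers={MyStack: MyStack.get_elements}))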