I ran the benchmark code below on a high-RAM Google Colab CPU runtime. Please point out any mistakes in how I am benchmarking (if there are any), and explain why tinygrad appears to achieve such a huge performance gain.
import time
import numpy as np
import torch
from tinygrad.tensor import Tensor

# Set the size of the matrices
size = 10000
# Generate a random 10000x10000 matrix with NumPy
np_array = np.random.rand(size, size)
# Generate a random 10000x10000 matrix with PyTorch
torch_tensor = torch.rand(size, size)
# Generate a random 10000x10000 matrix with TinyGrad
tg_tensor = Tensor.rand(size, size)
# Benchmark NumPy
start_np = time.time()
np_result = np_array @ np_array # Matrix multiplication
np_time = time.time() - start_np
print(f"NumPy Time: {np_time:.6f} seconds")
# Benchmark PyTorch
start_torch = time.time()
torch_result = torch_tensor @ torch_tensor # Matrix multiplication
torch_time = time.time() - start_torch
print(f"PyTorch Time: {torch_time:.6f} seconds")
# Benchmark TinyGrad
start_tg = time.time()
tg_result = tg_tensor @ tg_tensor # Matrix multiplication
tg_time = time.time() - start_tg
print(f"TinyGrad Time: {tg_time:.6f} seconds")
NumPy Time: 11.977072 seconds
PyTorch Time: 7.905509 seconds
TinyGrad Time: 0.000607 seconds
These are the results. After running the code multiple times, the numbers stay very similar.
tinygrad executes operations lazily, so at the point where you stop the timer the matrix multiplication has not actually been performed yet — you are only timing the construction of the computation graph, not the matmul itself. To force execution inside the timed region, change the matrix multiplication line to:

tg_result = (tg_tensor @ tg_tensor).realize()
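To see why timing a lazy operation measures almost nothing, here is a minimal pure-Python sketch (not tinygrad's actual implementation — the class name and structure are invented for illustration): a "lazy" object just records its operands, and the real O(n³) work only happens when realize() is called.

```python
import time

class LazyMatmul:
    """Toy stand-in for a lazy tensor op: construction is free, realize() does the work."""
    def __init__(self, a, b):
        self.a, self.b = a, b      # just store the operands; no computation yet
        self.result = None

    def realize(self):
        # The actual matrix multiplication happens only here.
        n = len(self.a)
        self.result = [
            [sum(self.a[i][k] * self.b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)
        ]
        return self.result

n = 100
a = [[1.0] * n for _ in range(n)]

start = time.time()
lazy = LazyMatmul(a, a)            # analogue of tg_tensor @ tg_tensor
build_time = time.time() - start   # near zero: nothing was computed

start = time.time()
lazy.realize()                     # analogue of .realize(): forces the matmul
realize_time = time.time() - start # the true cost shows up here

print(f"build: {build_time:.6f}s, realize: {realize_time:.6f}s")
```

The build step finishes in microseconds while realize() carries the full cost, which mirrors why the original benchmark reported 0.000607 seconds for tinygrad. Note that even with .realize() added, a fair comparison would also warm up each library first, since one-off overheads (allocation, kernel compilation) can dominate a single timed run.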