Is this benchmark valid? For medium-size (10000 x 10000) matrix multiplication (CPU), tinygrad is unbelievably fast compared to torch or numpy


I ran the following benchmark code on a Google Colab CPU runtime with high memory enabled. Please point out any mistakes in the way I am benchmarking (if there are any), and explain why tinygrad achieves such a large performance gain.

import time

import numpy as np
import torch
from tinygrad import Tensor

# Set the size of the matrices
size = 10000

# Generate a random 10000x10000 matrix with NumPy
np_array = np.random.rand(size, size)

# Generate a random 10000x10000 matrix with PyTorch
torch_tensor = torch.rand(size, size)

# Generate a random 10000x10000 matrix with TinyGrad
tg_tensor = Tensor.rand(size, size)  

# Benchmark NumPy
start_np = time.time()
np_result = np_array @ np_array  # Matrix multiplication
np_time = time.time() - start_np
print(f"NumPy Time: {np_time:.6f} seconds")

# Benchmark PyTorch
start_torch = time.time()
torch_result = torch_tensor @ torch_tensor  # Matrix multiplication
torch_time = time.time() - start_torch
print(f"PyTorch Time: {torch_time:.6f} seconds")

# Benchmark TinyGrad
start_tg = time.time()
tg_result = tg_tensor @ tg_tensor  # Matrix multiplication
tg_time = time.time() - start_tg
print(f"TinyGrad Time: {tg_time:.6f} seconds")
  • NumPy Time: 11.977072 seconds
  • PyTorch Time: 7.905509 seconds
  • TinyGrad Time: 0.000607 seconds

These are the results. Running the code multiple times produces very similar numbers.

1 Answer

Tinygrad executes operations "lazily", so at the point where you stop the timer the matrix multiplication has not actually been performed yet. Change the matrix multiplication line to:

tg_result = (tg_tensor @ tg_tensor).realize()

or

tg_result = (tg_tensor @ tg_tensor).numpy()
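
For reference, a minimal corrected version of the tinygrad portion of the benchmark (a sketch reusing the question's imports and tg_tensor; realize() forces the pending lazy graph to execute before the timer stops):

# Benchmark TinyGrad, forcing the lazy computation to actually run
start_tg = time.time()
tg_result = (tg_tensor @ tg_tensor).realize()  # realize() executes the pending matmul
tg_time = time.time() - start_tg
print(f"TinyGrad Time: {tg_time:.6f} seconds")

With the computation actually executed, the tinygrad timing should land in the same order of magnitude as the NumPy and PyTorch numbers, since all three libraries end up doing the same amount of CPU work.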