NumPy Fancy Indexing - Crop different ROIs from different channels

Problem description (votes: 3, answers: 2)

Suppose we have the following NumPy 4D array (3x1x4x4):

import numpy as np
n, c, h, w = 3, 1, 4, 4

data = np.arange(n * c * h * w).reshape(n, c, h, w)

>>> data
array([[[[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]],

       [[[16, 17, 18, 19],
         [20, 21, 22, 23],
         [24, 25, 26, 27],
         [28, 29, 30, 31]]],

       [[[32, 33, 34, 35],
         [36, 37, 38, 39],
         [40, 41, 42, 43],
         [44, 45, 46, 47]]]])

Now I want to crop each of the n sub-arrays at a different location, all with the same size:

size = 2
locations = np.array([
    [0, 1],
    [1, 1],
    [0, 2]
])

A slow way to do this is the following:

crops = np.stack([d[:, y:y+size, x:x+size] 
     for d, (y,x) in zip(data, locations)])

>>> crops
array([[[[ 1,  2],
         [ 5,  6]]],

       [[[21, 22],
         [25, 26]]],

       [[[34, 35],
         [38, 39]]]])

Now I am looking for a way to achieve this with NumPy's fancy (advanced) indexing. I have already spent hours trying to figure out how to solve this problem. Am I overlooking a simple way? Are there any NumPy indexing experts out there who can help me?
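For reference, one way such a crop can be expressed with plain advanced indexing is to broadcast per-sample row/column index grids against the batch and channel axes. This is a sketch, not necessarily the fastest route:

```python
import numpy as np

n, c, h, w = 3, 1, 4, 4
data = np.arange(n * c * h * w).reshape(n, c, h, w)
size = 2
locations = np.array([[0, 1], [1, 1], [0, 2]])

# Per-sample row/column index grids, each of shape (n, size)
rows = locations[:, 0, None] + np.arange(size)
cols = locations[:, 1, None] + np.arange(size)

# Four index arrays that broadcast together to (n, c, size, size)
crops = data[np.arange(n)[:, None, None, None],
             np.arange(c)[None, :, None, None],
             rows[:, None, :, None],
             cols[:, None, None, :]]
```

The four index arrays have shapes (n,1,1,1), (1,c,1,1), (n,1,size,1) and (n,1,1,size), so broadcasting produces one (size, size) patch per sample and channel.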

python arrays numpy indexing
2 Answers

1 vote

We can extend this solution to your 3D case by leveraging sliding-windowed views based on np.lib.stride_tricks.as_strided for efficient patch extraction, like so -

from skimage.util.shape import view_as_windows

def get_patches(data, locations, size):
    # Get 2D sliding windows for each element of data
    w = view_as_windows(data, (1,1,size,size))

    # Use fancy/advanced indexing to select the required ones
    return w[np.arange(len(locations)), :, locations[:,0], locations[:,1]][:,:,0,0]

We need those 1,1 as the window arguments to view_as_windows, because it expects the window to have the same number of elements as the number of dims of the input data. We are sliding along the last two axes of data, hence keeping the first two as 1s, i.e. essentially no sliding along the first two axes of data.
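For readers without scikit-image, the same windowed view can be built with NumPy's own np.lib.stride_tricks.sliding_window_view (available in NumPy >= 1.20); a minimal sketch of the same idea:

```python
import numpy as np

n, c, h, w = 3, 1, 4, 4
data = np.arange(n * c * h * w).reshape(n, c, h, w)
size = 2
locations = np.array([[0, 1], [1, 1], [0, 2]])

# Slide only over the last two axes; the view has shape
# (n, c, h-size+1, w-size+1, size, size)
win = np.lib.stride_tricks.sliding_window_view(data, (size, size), axis=(2, 3))

# Advanced indexing picks one window per sample; result is (n, c, size, size)
crops = win[np.arange(n), :, locations[:, 0], locations[:, 1]]
```

Because the axis argument restricts the windowing to the last two dimensions, there is no need for the 1,1 padding of the window shape that view_as_windows requires.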

Sample runs with single-channel and multi-channel data -

In [78]: n, c, h, w = 3, 1, 4, 4 # number of channels = 1
    ...: data = np.arange(n * c * h * w).reshape(n, c, h, w)
    ...: 
    ...: size = 2
    ...: locations = np.array([
    ...:     [0, 1],
    ...:     [1, 1],
    ...:     [0, 2]
    ...: ])
    ...: 
    ...: crops = np.stack([d[:, y:y+size, x:x+size] 
    ...:      for d, (y,x) in zip(data, locations)])

In [79]: print(np.allclose(get_patches(data, locations, size), crops))
True

In [80]: n, c, h, w = 3, 5, 4, 4 # number of channels = 5
    ...: data = np.arange(n * c * h * w).reshape(n, c, h, w)
    ...: 
    ...: size = 2
    ...: locations = np.array([
    ...:     [0, 1],
    ...:     [1, 1],
    ...:     [0, 2]
    ...: ])
    ...: 
    ...: crops = np.stack([d[:, y:y+size, x:x+size] 
    ...:      for d, (y,x) in zip(data, locations)])

In [81]: print(np.allclose(get_patches(data, locations, size), crops))
True

Benchmarking

Other approaches -

# Original soln
def stack(data, locations, size):
    crops = np.stack([d[:, y:y+size, x:x+size] 
         for d, (y,x) in zip(data, locations)])    
    return crops

# scholi's soln
def allocate_assign(data, locations, size):
    n, c, h, w = data.shape
    crops = np.zeros((n,c,size,size))
    for i, (y,x) in enumerate(locations):
        crops[i,0,:,:] = data[i,0,y:y+size,x:x+size]
    return crops
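Note that allocate_assign above hardcodes channel 0, which is fine for the c = 1 benchmark but silently drops data otherwise. A channel-agnostic variant might look like this (a sketch; the function name is mine):

```python
import numpy as np

def allocate_assign_nc(data, locations, size):
    # Preallocate, then copy one (c, size, size) slab per sample
    n, c, h, w = data.shape
    crops = np.empty((n, c, size, size), dtype=data.dtype)
    for i, (y, x) in enumerate(locations):
        crops[i] = data[i, :, y:y+size, x:x+size]
    return crops
```

Using np.empty with the input dtype avoids both the zero-fill and the implicit float64 conversion of np.zeros((n,c,size,size)).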

From the comments, it seems the OP is interested in data of shape (512,1,60,60) and sizes of 12, 24, 48. So, let's set up the same with a function -

# Setup data
def create_inputs(size):
    np.random.seed(0)
    n, c, h, w = 512, 1, 60, 60
    data = np.arange(n * c * h * w).reshape(n, c, h, w)
    locations = np.random.randint(0,3,(n,2))
    return data, locations, size

Timings -

In [186]: data, locations, size = create_inputs(size=12)

In [187]: %timeit stack(data, locations, size)
     ...: %timeit allocate_assign(data, locations, size)
     ...: %timeit get_patches(data, locations, size)
1000 loops, best of 3: 1.26 ms per loop
1000 loops, best of 3: 1.06 ms per loop
10000 loops, best of 3: 124 µs per loop

In [188]: data, locations, size = create_inputs(size=24)

In [189]: %timeit stack(data, locations, size)
     ...: %timeit allocate_assign(data, locations, size)
     ...: %timeit get_patches(data, locations, size)
1000 loops, best of 3: 1.66 ms per loop
1000 loops, best of 3: 1.55 ms per loop
1000 loops, best of 3: 470 µs per loop

In [190]: data, locations, size = create_inputs(size=48)

In [191]: %timeit stack(data, locations, size)
     ...: %timeit allocate_assign(data, locations, size)
     ...: %timeit get_patches(data, locations, size)
100 loops, best of 3: 2.8 ms per loop
100 loops, best of 3: 3.33 ms per loop
1000 loops, best of 3: 1.45 ms per loop

0 votes

Stacking is slow. Since the size is known, it is better to allocate the cropped array first.

crops = np.zeros((3,1,size,size))
for i, (y,x) in enumerate(locations):
    crops[i,0,:,:] = data[i,0,y:y+size,x:x+size]

Divakar's solution is the fastest. With %%timeit I get 92.3 µs; your stack solution is 35.4 µs, and the example I provided takes 29.3 µs.
