Image to radar values

Problem description

I am trying to do something similar to the following topic: Radar image of dBZ values

However, here I have a figure with different dimensions and a less simple colour palette.

Legend: [legend image]

Image: [radar image]

Script:

import numpy as np
import PIL.Image

import matplotlib.pyplot as plt
import matplotlib.cm

# numba is an optional import, just here to make the function run faster
import numba


# Separated this function out since it's the majority of the run time and slow
@numba.njit()
def _get_distances(pixel: np.ndarray, calibration_inputs: np.ndarray):
    # Create the outarray where each position is the distance from the pixel at the matching index
    outarray = np.empty(shape=(calibration_inputs.shape[0]), dtype=np.int32)
    for i in range(calibration_inputs.shape[0]):
        # Calculate the vector difference
        #   NOTE: These must be signed integers to avoid issues with unsigned-integer wrapping (see "nuclear gandhi")
        diff = calibration_inputs[i] - pixel
        outarray[i] = diff[0] ** 2 + diff[1] ** 2 + diff[2] ** 2
    return outarray


def _main():
    # How many ticks are on the axes in the legend
    calibration_point_count = 17
    fname = 'C:/Users/lucas-fagundes/Downloads/getImagem (1).png'
    fname_chart = 'C:/Users/lucas-fagundes/Downloads/legenda_ciram.png'
    # Whether to collect the calibration data or not
    setup_mode = False
    # The image of the radar screen
    img = np.array(PIL.Image.open(fname))
    # The chart legend with the colour bars
    img_chart = np.array(PIL.Image.open(fname_chart))

    if setup_mode:
        fig = plt.figure()
        plt.title('Select center of colourbar then each tick on legend')
        plt.imshow(img_chart)
        selections = plt.ginput(calibration_point_count + 1)
        # Use the first click to find the horizontal line to read
        calibration_x = int(selections[0][1])
        calibration_ys = np.array([int(y) for y, x in selections[1:]], dtype=int)
        plt.close(fig)
        # Request the tick mark values
        calibration_values = np.empty(shape=(calibration_point_count,), dtype=float)
        for i in range(calibration_point_count):
            calibration_values[i] = float(input(f'Enter calibration point value {i:2}: '))
        # Create a plot to verify that the bars were effectively captured
        for index, colour in enumerate(['red', 'green', 'blue']):
            plt.plot(img_chart[calibration_x, calibration_ys[0]:calibration_ys[-1], index],
                     color=colour)
        plt.title('Colour components in legend')
        plt.show()

    else:
        # If you have already run the calibration once, you can put that data here
        # This saves you a lot of clicking in future runs
        calibration_x = 6
        calibration_ys = np.array([ 14,  43,  69,  93, 120, 152, 179, 206, 233, 259, 285, 312, 342, 371, 397, 421, 451])
        calibration_values = np.array([ 78. ,  73. ,  68. ,  63. ,  58. ,  53. ,  48. ,  43. ,  38. , 33. ,  28. ,  23. ,  18. ,  13. ,  10. , -10. , -31.5])
    # Record the pixel values to match the colours against
    calibration_inputs = img_chart[calibration_x, calibration_ys[0]:calibration_ys[-1], :3].astype(np.int32)
    # Print this information to console so that you can copy it into the code above and not rerun setup_mode
    print(f'{calibration_x = }')
    print(f'{calibration_ys = }')
    print(f'{calibration_values = }')
    # print(f'{calibration_inputs = }')

    # Make the output array the same size, but without RGB vector, just a magnitude
    arrout = np.zeros(shape=img.shape[:-1], dtype=img.dtype)
    # Iterate through every pixel (can be optimized a lot if you need to run this frequently)
    for i in range(img.shape[0]):
        # This takes a while to run, so print some status throughout
        print(f'\r{i / img.shape[0] * 100:.2f}%', end='')
        for j in range(img.shape[1]):
            # Change the type so that the subtraction in the _get_distances function works appropriately
            pixel = img[i, j].astype(np.int32)
            # If this pixel is too dark, leave it as 0
            if np.sum(pixel) < 100:
                continue
            # idx contains the index of the closest match
            idx = np.argmin(_get_distances(pixel, calibration_inputs))
            # Interpolate the value against the chart and save it to the output array
            arrout[i, j] = np.interp(idx + calibration_ys[0], calibration_ys, calibration_values)
    # Create a custom cmap based on jet which looks the most like the input image
    #   This step isn't necessary, but helps us compare the input to the output
    cmap = matplotlib.colormaps['jet']
    cmap.set_under('k')  # If the value is below the bottom clip, set it to black
    fig, ax = plt.subplots(3, 1, gridspec_kw={'wspace': 0.01, 'hspace': 0.01}, height_ratios=(3, 3, 1))
    ax[0].imshow(arrout, cmap=cmap, vmin=0.5); ax[0].axis('off')
    ax[1].imshow(img); ax[1].axis('off'); ax[1].sharex(ax[0]);  ax[1].sharey(ax[0])
    ax[2].imshow(img_chart); ax[2].axis('off')
    plt.show()


if __name__ == '__main__':
    _main()

Here is the error I get with my image:

[screenshot of the error traceback]

The error corresponds to my image having only one dimension, but I am not able to calibrate it. Can you help me?

python-3.x python-imaging-library numpy-ndarray
1 Answer

The biggest problem appears to be caused by the image formats. They differ from those in the question you linked, so they do not load the same way, which leads to the error you are seeing. The next problem is the data itself: the legend does not contain RGB values that closely match the main image, so even after a small correction from interpolation to a nearest-colour lookup, not every pixel matches well. This means that unless you can get better images (ideally PNG rather than JPG, which I suspect is corrupting the colours), you will not get great results.
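
If you want to confirm this on your own files before rerunning the full script, a minimal diagnostic sketch along these lines (the file names are placeholders for your own downloads) prints the mode PIL chose and shows how .convert('RGBA') normalises the array shape:

import numpy as np
import PIL.Image

# Hypothetical file names; substitute the paths to your own downloads.
fname = 'radar_image.png'
fname_chart = 'legend.jpg'

for path in (fname, fname_chart):
    with PIL.Image.open(path) as im:
        raw = np.array(im)
        rgba = np.array(im.convert('RGBA'))
        # A palette ('P') or greyscale ('L') PNG loads as a 2-D array, which is
        # what breaks indexing such as img[i, j].astype(np.int32) in the script above.
        print(f'{path}: mode={im.mode}, raw shape={raw.shape}, RGBA shape={rgba.shape}')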

Summary of the changes needed to get a reasonable result:

  • Force PIL to interpret the images as RGB by adding .convert('RGBA') to the image-loading line. This resolves the difference in loading and lets us index the image the same way as in the linked question.
  • Change calibration_inputs to just the selected pixels (rather than the whole row of pixels), since we only need the closest match and no longer interpolate.
  • Change arrout to be pre-filled with floats at some value below -31.5, the lowest value in the legend; I picked -50 somewhat arbitrarily. Anything below our bottom clip will do (see below).
  • When grabbing a pixel from the input image img, index it as [i, j, 0:3] so it stays consistent with the pixel-distance calculation.
  • Change the interpolation to simply retrieve the closest value already found by the numpy.argmin call. The retrieved index lines up directly with our adjusted calibration_values object.
  • Adjust the bottom clip vmin in the image display to something below -31.5, the lowest expected value in the legend. That keeps all of the no-value pixels out of the colour mapping. I picked -40, which sits between the fill value above and the lowest expected value (a small sketch of this clipping behaviour follows the list).
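
To illustrate that last point in isolation, here is a minimal sketch (toy data only, not the radar image) of how vmin together with cmap.set_under blacks out the pre-filled -50 pixels while leaving the real legend range untouched:

import numpy as np
import matplotlib
import matplotlib.pyplot as plt

# Toy data: -50 is the "no echo" fill value, real legend values run from -31.5 up to 78
demo = np.array([[-50.0, -31.5,  0.0],
                 [ 20.0,  50.0, 78.0]])

cmap = matplotlib.colormaps['jet'].copy()  # copy so the registered colormap is untouched
cmap.set_under('k')                        # anything below vmin renders as black

plt.imshow(demo, cmap=cmap, vmin=-40)      # -40 sits between the -50 fill and -31.5
plt.colorbar(extend='min')
plt.show()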

Here is a version with those changes; it works with downloads of the two images above.

import numpy as np
import PIL.Image

import matplotlib.pyplot as plt
import matplotlib.cm

# numba is an optional import, just here to make the function run faster
import numba


# Separated this function out since it's the majority of the run time and slow
@numba.njit()
def _get_distances(pixel: np.ndarray, calibration_inputs: np.ndarray):
    # Create the outarray where each position is the distance from the pixel at the matching index
    outarray = np.empty(shape=(calibration_inputs.shape[0]), dtype=np.int32)
    for i in range(calibration_inputs.shape[0]):
        # Calculate the vector difference
        #   NOTE: These must be signed integers to avoid issues with unsigned-integer wrapping (see "nuclear gandhi")
        diff = calibration_inputs[i] - pixel
        outarray[i] = diff[0] ** 2 + diff[1] ** 2 + diff[2] ** 2
    return outarray


def _main():
    # How many ticks are on the axes in the legend
    calibration_point_count = 17
    fname = 'radar_image.png'
    fname_chart = 'legend.jpg'
    # Whether to collect the calibration data or not
    setup_mode = False
    # The image of the radar screen
    img = np.array(PIL.Image.open(fname).convert('RGBA'))
    # The chart legend with the colour bars
    img_chart = np.array(PIL.Image.open(fname_chart))

    if setup_mode:
        fig = plt.figure()
        plt.title('Select center of colourbar then each tick on legend')
        plt.imshow(img_chart)
        selections = plt.ginput(calibration_point_count + 1)
        # Use the first click to find the horizontal line to read
        calibration_x = int(selections[0][1])
        calibration_ys = np.array([int(y) for y, x in selections[1:]], dtype=int)
        plt.close(fig)
        # Request the tick mark values
        calibration_values = np.empty(shape=(calibration_point_count,), dtype=float)
        for i in range(calibration_point_count):
            calibration_values[i] = float(input(f'Enter calibration point value {i:2}: '))
        # Create a plot to verify that the bars were effectively captured
        for index, colour in enumerate(['red', 'green', 'blue']):
            plt.plot(img_chart[calibration_x, calibration_ys[0]:calibration_ys[-1], index],
                     color=colour)
        plt.title('Colour components in legend')
        plt.show()

    else:
        # If you have already run the calibration once, you can put that data here
        # This saves you a lot of clicking in future runs
        calibration_x = 6
        calibration_ys = np.array([ 14,  43,  69,  93, 120, 152, 179, 206, 233, 259, 285, 312, 342, 371, 397, 421, 451])
        calibration_values = np.array([ 78. ,  73. ,  68. ,  63. ,  58. ,  53. ,  48. ,  43. ,  38. , 33. ,  28. ,  23. ,  18. ,  13. ,  10. , -10. , -31.5])
    # Record the pixel values to match the colours against
    calibration_inputs = np.array([img_chart[calibration_x, x] for x in calibration_ys])
    # Print this information to console so that you can copy it into the code above and not rerun setup_mode
    print(f'{calibration_x = }')
    print(f'{calibration_ys = }')
    print(f'{calibration_values = }')
    # print(f'{calibration_inputs = }')

    # Make the output array the same size, but without RGB vector, just a magnitude
    arrout = np.full(shape=img.shape[:-1], fill_value=-50.0, dtype=np.float32)
    # Iterate through every pixel (can be optimized a lot if you need to run this frequently)
    for i in range(img.shape[0]):
        # This takes a while to run, so print some status throughout
        print(f'\r{i / img.shape[0] * 100:.2f}%', end='')
        for j in range(img.shape[1]):
            # Change the type so that the subtraction in the _get_distances function works appropriately
            pixel = img[i, j, 0:3].astype(np.int32)
            # If this pixel is too dark, leave it at the -50 fill value
            if np.sum(pixel) < 100:
                continue
            # idx contains the index of the closest match
            idx = np.argmin(_get_distances(pixel, calibration_inputs))
            # Grab the legend value for the pixel with the closest match
            arrout[i, j] = calibration_values[idx]
    # Create a custom cmap based on jet which looks the most like the input image
    #   This step isn't necessary, but helps us compare the input to the output
    cmap = matplotlib.colormaps['jet']
    cmap.set_under('k')  # If the value is below the bottom clip, set it to black
    fig, ax = plt.subplots(3, 1, gridspec_kw={'wspace': 0.01, 'hspace': 0.01}, height_ratios=(3, 3, 1))
    ax[0].imshow(arrout, cmap=cmap, vmin=-40); ax[0].axis('off')
    ax[1].imshow(img); ax[1].axis('off'); ax[1].sharex(ax[0]);  ax[1].sharey(ax[0])
    ax[2].imshow(img_chart); ax[2].axis('off')
    plt.show()


if __name__ == '__main__':
    _main()

Here is the plot I get when I run it.

[sample output plot]

The colour loss in the images produces a lot of artifacts. You will probably get better results if you can get both images as PNGs, but I will not guarantee it.
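
As a rough way to see that colour corruption (a hedged sketch; 'legend.jpg' is a placeholder for your downloaded legend file), you can count how many distinct RGB triples the legend actually contains. A clean colour bar would have only a handful of flat bands, while JPEG compression smears them into many nearby colours:

import numpy as np
import PIL.Image

chart = np.array(PIL.Image.open('legend.jpg').convert('RGB'))
# Flatten to an (N, 3) list of pixels and count the distinct RGB triples.
unique_colours = np.unique(chart.reshape(-1, 3), axis=0)
print(f'{unique_colours.shape[0]} distinct colours in the legend image')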

Let me know if you have any questions.
