ERROR: query has no destination for result data, in an Aurora PostgreSQL procedure

CREATE PROCEDURE Archivedata() LANGUAGE PLPGSQL AS $$
Declare count int;
declare maxcount int;
declare fulltablename varchar(100);
declare ptablename varchar(100);
declare bucketname varchar(100);
BEGIN

DROP TABLE IF EXISTS detachedtable;
Create temporary table DetachedTable(tableName varchar(200),rankk int);
Insert into DetachedTable(tableName,rankk)
select relname,row_number() over(order by relname ASC) as rankk from pg_class
join pg_namespace n on n.oid = relnamespace where relkind = 'r' and relispartition ='f'
and relname like '%_p20%' and n.nspname = 'dbo';

count:=1;
Select count(1) as tablecount into maxcount from DetachedTable;

raise notice 'Total no of table is =%',maxcount;
    while (count<=maxcount) loop
    select 'dbo.'||tableName into fulltablename from DetachedTable where rankk=count;
    select tableName into ptablename from DetachedTable where rankk=count;
    select substring(tableName,1,(position('_p20' in tableName)-1)) into bucketname from DetachedTable where rankk=count;
        SELECT * FROM aws_s3.query_export_to_s3('Select * from '|| fulltablename,
aws_commons.create_s3_uri(
'archive-bucket',
bucketname||'/'||ptablename,'us-east-2'));

    count:=count+1;
    end loop;
END;$$;

The stored procedure above throws the following error when it runs:

ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT: PL/pgSQL function archivedata() line 30 at SQL statement

SQL state: 42601

I am trying to archive data using pg_partman. This stored procedure is meant to identify detached partitions and transfer their data to an S3 bucket. It does not select any specific rows; it simply moves each entire partition to S3 using the aws_s3.query_export_to_s3 function.

We have multiple tables, and the procedure needs to determine the appropriate bucket name and folder for each table dynamically. Since I am new to PostgreSQL and AWS, I am looking for guidance on resolving this issue.

Is a stored procedure the best approach for this task, or should I consider using a function instead?

postgresql amazon-aurora pg-partman
1 Answer

The error you're encountering stems from how PostgreSQL handles SELECT statements in PL/pgSQL when their result is not used. In PL/pgSQL, if you execute a SELECT statement that returns data and you neither capture nor otherwise handle that data, PostgreSQL raises an error.
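Here is a minimal, self-contained sketch of that rule (the queries are just placeholders for illustration):

DO $$
DECLARE
    n int;
BEGIN
    -- Fine: the result is captured into a variable.
    SELECT count(*) INTO n FROM pg_class;

    -- Fine: PERFORM runs the query and discards its result.
    PERFORM pg_sleep(0);

    -- Would fail with "query has no destination for result data",
    -- because the result is neither captured nor discarded:
    -- SELECT count(*) FROM pg_class;
END;
$$;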

Here is how you should modify your stored procedure:

CREATE PROCEDURE Archivedata() LANGUAGE PLPGSQL AS $$
DECLARE
    count int;
    maxcount int;
    fulltablename varchar(100);
    ptablename varchar(100);
    bucketname varchar(100);
BEGIN
    DROP TABLE IF EXISTS detachedtable;
    CREATE TEMPORARY TABLE DetachedTable(tableName varchar(200), rankk int);
    INSERT INTO DetachedTable(tableName, rankk)
    SELECT relname, ROW_NUMBER() OVER (ORDER BY relname ASC) AS rankk
    FROM pg_class
    JOIN pg_namespace n ON n.oid = relnamespace
    WHERE relkind = 'r' AND relispartition = 'f'
    AND relname LIKE '%_p20%' AND n.nspname = 'dbo';

    count := 1;
    SELECT COUNT(1) INTO maxcount FROM DetachedTable;

    RAISE NOTICE 'Total number of tables is =%', maxcount;
    WHILE count <= maxcount LOOP
        SELECT 'dbo.' || tableName INTO fulltablename FROM DetachedTable WHERE rankk = count;
        SELECT tableName INTO ptablename FROM DetachedTable WHERE rankk = count;
        SELECT SUBSTRING(tableName, 1, (POSITION('_p20' IN tableName) - 1)) INTO bucketname FROM DetachedTable WHERE rankk = count;

        PERFORM aws_s3.query_export_to_s3(
            'SELECT * FROM ' || fulltablename,
            aws_commons.create_s3_uri(
                'archive-bucket',
                bucketname || '/' || ptablename,
                'us-east-2'
            )
        );

        count := count + 1;
    END LOOP;
END;
$$;
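For reference, a procedure is invoked with CALL rather than SELECT, so after recreating it (you may need to drop the old definition first) the export run looks like this:

-- Run the archiving procedure; each detached partition is exported to S3 in turn.
CALL Archivedata();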

By replacing SELECT * FROM with PERFORM, you tell PostgreSQL that you do not need the function's result set, which prevents the error.

Change this:

SELECT * FROM aws_s3.query_export_to_s3('Select * from '|| fulltablename,
aws_commons.create_s3_uri(
'archive-bucket',
bucketname||'/'||ptablename,'us-east-2'));

to this:

PERFORM aws_s3.query_export_to_s3('Select * from ' || fulltablename,
aws_commons.create_s3_uri(
    'archive-bucket',
    bucketname || '/' || ptablename,
    'us-east-2'
));

This should resolve the issue. Let me know if it works.
