APACHE PHOENIX error: org.apache.phoenix.mapreduce.CsvBulkLoadTool - constraint violation

Problem description

I run the following command in the CLI:

hbase org.apache.phoenix.mapreduce.CsvBulkLoadTool -Dhbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily=1024 -t TABLE -c ID,c1,c2,c3 -i /hive_to_hbase/output_2023.csv

The execution returns the following:

Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException: ERROR 218 (23018): Constraint violation. TABLE.ID may not be null
        at org.apache.phoenix.mapreduce.FormatToBytesWritableMapper.map(FormatToBytesWritableMapper.java:212)
        at org.apache.phoenix.mapreduce.FormatToBytesWritableMapper.map(FormatToBytesWritableMapper.java:78)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 218 (23018): Constraint violation. TEST_DM_CDR_DWH.ID may not be null
        at com.google.common.base.Throwables.propagate(Throwables.java:156)
        at org.apache.phoenix.mapreduce.FormatToBytesWritableMapper$MapperUpsertListener.errorOnRecord(FormatToBytesWritableMapper.java:409)
        at org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:100)
        at org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:52)
        at org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
        at org.apache.phoenix.mapreduce.FormatToBytesWritableMapper.map(FormatToBytesWritableMapper.java:181)
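
Since the exception points at a null ID (the primary key), my first suspicion is that some rows in the CSV arrive with an empty first field. A quick sanity check I could run (a sketch only; it assumes the file is comma-delimited, has no header row, and the ID value sits in the first column):

# count rows whose first (ID) field is empty -- path taken from the bulk-load command above
hdfs dfs -cat /hive_to_hbase/output_2023.csv | awk -F',' '$1 == "" { n++ } END { print n+0, "rows with an empty ID field" }'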

I created the table in Apache Phoenix with the following SQL statement:

CREATE TABLE TEST_DM_CDR_DWH(
    id VARCHAR PRIMARY KEY,
    "COLUMNS"."c1" VARCHAR,
    "COLUMNS"."c2" VARCHAR,
    "COLUMNS"."c3" VARCHAR
) TTL=31536000, COMPRESSION='ZSTD';

and this index:

CREATE INDEX TEST_DM_CDR_DWH_IDX ON TEST_DM_CDR_DWH ("COLUMNS"."c1") INCLUDE("COLUMNS"."c2");
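
Note that in the CREATE TABLE the c1/c2/c3 columns are quoted (so they keep their lowercase names), while the -c ID,c1,c2,c3 list in the bulk-load command is unquoted. To see how Phoenix actually stored the column names, a query like this against the system catalog should work (a sketch; it assumes the standard SYSTEM.CATALOG layout):

-- list the columns Phoenix registered for the target table
SELECT COLUMN_FAMILY, COLUMN_NAME
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'TEST_DM_CDR_DWH' AND COLUMN_NAME IS NOT NULL;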

Can someone explain this to me?

So far I have only searched Google for an answer.

apache hadoop hbase apache-phoenix