So I have three data sources that I want to join together to produce some output.
File1.json: 378 MB
File2.json: 72 KB
File3.json: 500 KB
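For completeness, the custom JsonExtractor from the U-SQL samples library normally needs the sample formats assemblies referenced before the EXTRACT statements. A minimal sketch, assuming both assemblies have already been registered in the account's U-SQL catalog:

REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];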
@extractFile1 = EXTRACT columnList FROM PATH "path/File1.json" USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();
@extractFile2 = EXTRACT columnList FROM PATH "path/File2.json" USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();
@extractFile3 = EXTRACT columnList FROM PATH "path/File3.json" USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();
@result =
    SELECT f1.column, f2.column, f1.column, f3.column
    FROM @extractFile3 AS f3
    INNER JOIN
    (
        SELECT f3new.column,
               f3new.column AS somename
        FROM @extractFile1 AS f1
        INNER JOIN @extractFile3 AS f3new ON f1.column == f3new.column
        GROUP BY f3new.column
    ) AS first
    ON f3.column == somename
    INNER JOIN @extractFile1 AS f1 ON f3.column == f1.column
    INNER JOIN @extractFile2 AS f2 ON f1.column == f3.column;
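(For context, a U-SQL job only materializes results once the rowset is passed to an OUTPUT statement; a minimal sketch follows, where the output path and the choice of the built-in Csv outputter are assumptions, not from the original post.)

OUTPUT @result
TO "path/output.csv"
USING Outputters.Csv();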
Running this results in the combine operation in the job graph showing Written: 195 GB and still climbing, and it has been running on a single vertex for 70 minutes.
Does anyone know why a combine operation in the execution plan would even be able to write out that much data?
Have you tried turning on the InputFileGrouping preview feature? I saw a significant performance improvement when processing hundreds of small JSON files in ADLA.
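To the best of my recollection, U-SQL preview features are toggled per script with a SET statement along the lines of the sketch below; treat the exact feature name and value as something to verify against the current ADLA release notes rather than as confirmed syntax:

SET @@FeaturePreviews = "InputFileGrouping:on";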