How to hash users to random values in Azure Databricks

Problem description

We have Excel files in file storage, and each Excel file contains more than 10,000 columns of JSON data.

For example, see the samples below. **Note: the Excel file in my file storage contains more than 10,000 columns of JSON data, i.e. roughly 10,000 JSONs, and I need to read the whole Excel file and apply the user transformation.**

json 1:

{"SearchName":"","Id":"","RequestType":"","StartDateUtc":"2022-12-01T00:00:00Z","EndDateUtc":"2023-04-28T00:00:00Z","RecordType":null,"Workload":"","Operations":[],"Users":["d503246e-285c-41bc-8b0a-bc79824146ea,[email protected],ab6019a4c-1e03ee9be97a,[email protected],85ff-b51c-b88ad4d55b5a,[email protected],48168530-6-8985-65f9b0af2b85,[email protected],0937a1e5-8a68-4573-ae9c-e13f9a2f3617,[email protected],c822dd8b-0b79-4c13-af1e-bc080b8108c5,[email protected],ca0de5ba-6ab2-4d34-b19d-ca702dcbdb8d,[email protected]"],"ObjectIds":[],"IPAddresses":[],"SiteIds":null,"AssociatedAdminUnits":[],"FreeText":"multifactor","ResultSize":0,"TimeoutInSeconds":345600,"ScopedAdminWithoutAdminUnits":false}

json 2:

{"SearchName":"xiong.jie.wu2"",""Id":"6797200c-4-a40c-6d8cfe7d6c16"",""RequestType":"AuditSearch"",""StartDateUtc":"2023-12-01T00:00:00Z"",""EndDateUtc":"2024-01-26T00:00:00Z"",""RecordType":null",""RecordTypes":[]",""Workload":null",""Workloads":[]",""WorkloadsToInclude":null",""WorkloadsToExclude":null",""ScopedWorkloadSearchEnabled":false",""Operations":["copy"",""harddelete"",""movetodeleteditems"",""move"",""softdelete"",""new-inboxrule"",""set-inboxrule"",""updateinboxrules"",""add-mailboxpermission"",""addfolderpermissions"",""modifyfolderpermissions"]",""Users":["[email protected]"]",""ObjectIds":[]",""IPAddresses":[]",""SiteIds":null",""AssociatedAdminUnits":[]",""FreeText":""",""ResultSize":0",""TimeoutInSeconds":345600",""ScopedAdminWithoutAdminUnits":false}

.............................. and so on, like 10,000 JSONs in one Excel file.

We just want to change the user hash values into plain masked values. Like this: for every user across the entire Excel file, convert

[email protected]
into
[email protected]

Each time we manually copy the user data and mask it as shown below, which costs us a lot of time; then, whatever output we get, we simply replace the hash values with that output.

import random

# pool of masked addresses to draw replacements from
main=['[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]']

# hash/email pairs copied out of one Users value, to be replaced
l=["0e07209b-807b-4938-8bfd-f87cee98e924,[email protected],c747a82c-656e-40eb-9194-88c4a0f8061e"]
n=len(l)
print(n)
# draw n random replacements from the pool
print(random.sample(main,n))

My question: is there a way in Azure Databricks to replace all of the hashed Users values in the JSON across the whole Excel file with random users such as [email protected] in one pass, and then write the result back to a specific location?

azure azure-databricks
1 Answer

I tried the following approach. First, install these libraries:

%pip install openpyxl
%pip install xlrd
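(Side note, not from the original answer: openpyxl reads .xlsx, while xlrd 2.x only reads legacy .xls, so passing the engine to pandas explicitly removes any ambiguity. A minimal sketch, assuming the same local path used below:)

import pandas as pd

# openpyxl handles .xlsx; xlrd (>=2.0) is only needed for legacy .xls files
df = pd.read_excel('/tmp/sample_excel_file.xlsx', engine='openpyxl')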

The code below masks the email addresses in the JSON data stored in the Excel file:

import pandas as pd
import json

dbfs_input_path = 'dbfs:/FileStore/tables/sample_excel_file.xlsx'
local_input_path = '/tmp/sample_excel_file.xlsx'

# copy the Excel file from DBFS to the driver's local disk so pandas can read it
dbutils.fs.cp(dbfs_input_path, 'file:' + local_input_path)
df = pd.read_excel(local_input_path)

def mask_emails(user_list):
    # replace every entry in the Users array with the masked placeholder
    return ["[email protected]" for _ in user_list]

# walk every cell, parse it as JSON, and mask the Users key where present
for col in df.columns:
    for idx, json_data in df[col].items():
        try:
            json_obj = json.loads(json_data)
            if 'Users' in json_obj:
                json_obj['Users'] = mask_emails(json_obj['Users'])
            df.at[idx, col] = json.dumps(json_obj)
        except (json.JSONDecodeError, TypeError):
            continue  # skip cells that are empty or not valid JSON


local_output_path = '/tmp/transformed_excel_file.xlsx'
df.to_excel(local_output_path, index=False)

# copy the transformed file from the driver's local disk back to DBFS
dbfs_output_path = 'dbfs:/FileStore/tables/transformed_excel_file.xlsx'
dbutils.fs.cp('file:' + local_output_path, dbfs_output_path)
print(f"Transformed file saved to {dbfs_output_path}")

Output:

Transformed file saved to dbfs:/FileStore/tables/transformed_excel_file.xlsx
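The code above maps every user to the same placeholder. If, as the question asks, each user should instead get a distinct random identity that stays consistent across all 10,000 JSONs, a dictionary keyed on the original value can back the substitution. A minimal sketch, not from the original answer: the pseudonymize helper and the user_<tag>@example.com pattern are illustrative, and this mask_emails variant can be dropped into the loop above:

import random
import string

pseudonyms = {}  # original value -> generated replacement

def pseudonymize(value):
    # reuse the same random identity for values seen before, so one
    # user maps to one pseudonym across the whole Excel file
    if value not in pseudonyms:
        tag = ''.join(random.choices(string.ascii_lowercase + string.digits, k=8))
        pseudonyms[value] = 'user_' + tag + '@example.com'
    return pseudonyms[value]

def mask_emails(user_list):
    return [pseudonymize(u) for u in user_list]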

In the code above,

dbutils.fs.cp
is used to copy files between DBFS and the local file system: the file is staged locally for processing and then copied back to DBFS.
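As an aside (not part of the answer above), DBFS is usually also exposed on the driver through the /dbfs FUSE mount, so the copy step can be skipped where that mount is available, e.g. on standard all-purpose clusters:

import pandas as pd

# dbfs:/FileStore/... is visible locally as /dbfs/FileStore/... via the FUSE mount
df = pd.read_excel('/dbfs/FileStore/tables/sample_excel_file.xlsx')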

Result:

# read the transformed file back from DBFS and display it to verify the masking
dbfs_output_path = 'dbfs:/FileStore/tables/transformed_excel_file.xlsx'
local_output_path = '/tmp/transformed_excel_file.xlsx'
dbutils.fs.cp(dbfs_output_path, 'file:' + local_output_path)
df_transformed = pd.read_excel(local_output_path)
display(df_transformed)
Column1:
{"SearchName": "", "Id": "", "RequestType": "", "StartDateUtc": "2022-12-01T00:00:00Z", "EndDateUtc": "2023-04-28T00:00:00Z", "RecordType": null, "Workload": "", "Operations": [], "Users": ["[email protected]"], "ObjectIds": [], "IPAddresses": [], "SiteIds": null, "AssociatedAdminUnits": [], "FreeText": "multifactor", "ResultSize": 0, "TimeoutInSeconds": 345600, "ScopedAdminWithoutAdminUnits": false}

Column2:
{"SearchName": "xiong.jie.wu2", "Id": "6797200c-4-a40c-6d8cfe7d6c16", "RequestType": "AuditSearch", "StartDateUtc": "2023-12-01T00:00:00Z", "EndDateUtc": "2024-01-26T00:00:00Z", "RecordType": null, "RecordTypes": [], "Workload": null, "Workloads": [], "WorkloadsToInclude": null, "WorkloadsToExclude": null, "ScopedWorkloadSearchEnabled": false, "Operations": ["copy", "harddelete", "movetodeleteditems", "move", "softdelete", "new-inboxrule", "set-inboxrule", "updateinboxrules", "add-mailboxpermission", "addfolderpermissions", "modifyfolderpermissions"], "Users": ["[email protected]"], "ObjectIds": [], "IPAddresses": [], "SiteIds": null, "AssociatedAdminUnits": [], "FreeText": "", "ResultSize": 0, "TimeoutInSeconds": 345600, "ScopedAdminWithoutAdminUnits": false}