I am trying to capture a response from Postman, and it needs to be stored in a CSV file.
Here is the code I tried, but I am not getting the expected output:
import csv
import sys

# session and data are assumed to be set up earlier in the script
UsergroupURL = "https://" + "dex.3ds.com"
UsergroupsURL = UsergroupURL + "/3drdd/resources/b1/usersgroup?select=title,description,owner,members,pending_members,creation_date,modification_date&top=100&skip=0"
skip = 0
all_results = []
while True:
    usergroupresponse = session.post(UsergroupsURL, data=data)
    if usergroupresponse.status_code != 200:
        Failmsg = "Failed to post usergroup. Status code : " + str(usergroupresponse.status_code)
        sys.exit(Failmsg)
    results = usergroupresponse.json()
    print(results)
    if len(results) == 0:
        # No more results to retrieve
        break
    # Append the results to the all_results list
    all_results += results
    # Increment skip to skip the previously retrieved results
    skip += 100
    # Update the URL with the new skip value
    UsergroupsURL = UsergroupURL + "/3drdd/resources/b1/usersgroup?select=title,description,owner,members,pending_members,creation_date,modification_date&top=100&skip=" + str(skip)

print(all_results)
# Write the results to a CSV file
with open('response.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for row in all_results:
        writer.writerow(row.values())
In this code, only 100 records are fetched after the first call to the URL, but the endpoint holds more data than that, so the skip parameter is passed to retrieve all of it; only the first 100 records end up captured in the CSV file.

What I expect: on the second call the skip value should change to 100, so the first 100 records are skipped and the second 100 are retrieved and captured to the CSV file; on the third call the skip value should change to 200 so the next batch is retrieved, and so on, calling the URL repeatedly until it returns no more data.
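The skip/top pagination described above can be sketched generically. Since the real endpoint is not reachable here, the `fetch_page` stub below stands in for the `session.post(...).json()` call (the stub, its field names, and the sample data are assumptions for illustration); the loop stops on the first empty page and then writes everything to one CSV:

```python
import csv
import io

# Stub standing in for one paginated API call: returns up to `top`
# records starting at `skip`, and an empty list when exhausted.
SAMPLE = [{"title": f"group{i}", "owner": f"user{i}"} for i in range(250)]

def fetch_page(skip, top=100):
    return SAMPLE[skip:skip + top]

all_results = []
skip = 0
while True:
    results = fetch_page(skip)   # real code: session.post(url, data=data).json()
    if not results:              # empty page -> no more data, stop requesting
        break
    all_results += results
    skip += 100                  # next request skips what we already collected

# Write all collected pages to ONE csv, header row first.
buf = io.StringIO()              # real code: open('response.csv', 'w', newline='')
writer = csv.DictWriter(buf, fieldnames=["title", "owner"])
writer.writeheader()
writer.writerows(all_results)
```

The key difference from the original loop is that the CSV is written once, after pagination finishes, so no page overwrites an earlier one.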
Here is my updated code:
Only this part needs to change:
csv_filename = f"response_{skip_value}.csv"
with open(csv_filename, 'w', newline='', encoding='utf-8', errors='ignore') as csvfile:
    writer = csv.writer(csvfile)
    header = ["uri", "title", "description", "owner", "members", "pending_members", "creation_date", "modification_date"]
    writer.writerow(header)
    for group in results['groups']:
        writer.writerow(group[x] for x in header)
if not results:
    break
# Increment the skip value by 100
skip_value += 100
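Putting the pieces together, a corrected version of the updated loop could look like the sketch below. It keeps the per-page file naming and the `results['groups']` shape from the snippet, checks for an empty page *before* writing, and uses `.get()` so a missing field does not raise a KeyError. The `fetch_page` stub and the in-memory `pages` dict stand in for the real request and for files on disk, which are assumptions made so the sketch can run on its own:

```python
import csv
import io

# Fake paginated responses in the shape the snippet expects:
# {"groups": [ {...}, ... ]} with an empty list on the last page.
HEADER = ["uri", "title", "description", "owner", "members",
          "pending_members", "creation_date", "modification_date"]
DATA = [{h: f"{h}{i}" for h in HEADER} for i in range(150)]

def fetch_page(skip, top=100):
    return {"groups": DATA[skip:skip + top]}

pages = {}                # stand-in for files on disk: filename -> csv text
skip_value = 0
while True:
    results = fetch_page(skip_value)     # real code: session.post(...).json()
    if not results["groups"]:            # stop when a page comes back empty
        break
    buf = io.StringIO()                  # real code: open(csv_filename, 'w', newline='')
    writer = csv.writer(buf)
    writer.writerow(HEADER)
    for group in results["groups"]:
        # .get() avoids a KeyError if a field is missing in one record
        writer.writerow([group.get(x, "") for x in HEADER])
    pages[f"response_{skip_value}.csv"] = buf.getvalue()
    skip_value += 100                    # advance to the next page
```

With 150 sample records this produces two files, `response_0.csv` and `response_100.csv`; if a single combined file is preferred, open it once before the loop and write the header only on the first iteration.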