Why can't I clean a PDF table and rename the columns inside a function?


I figured out how to scrape this PDF, but I have many files like this to go through. My goal is to turn this into a function that imports the data from all of the PDFs (one per month over several years), then rbind() the results into one data table that I can write out as a CSV.

This works:

library(tidyverse)
library(tabulizer)

#import the data
jan16s_raw <- extract_tables("https://www.nvsos.gov/sos/home/showdocument?id=4062")

#create data frame
cleanNvsen <- do.call(rbind, jan16s_raw)
cleanNvsen2 <- as.data.frame(cleanNvsen[3:nrow(cleanNvsen),])

#rename all of the columns
names(cleanNvsen2)[1] <- "District"
names(cleanNvsen2)[2] <- "Democrat"
names(cleanNvsen2)[3] <- "Independent American"
names(cleanNvsen2)[4] <- "Libertarian"
names(cleanNvsen2)[5] <- "Nonpartisan"
names(cleanNvsen2)[6] <- "Other"
names(cleanNvsen2)[7] <- "Republican"
names(cleanNvsen2)[8] <- "Total"

#check to see if it worked
head(cleanNvsen2)

But this produces a 1 x 1 data frame:

library(tidyverse)
library(tabulizer)

#load data
jan16s_raw <- extract_tables("https://www.nvsos.gov/sos/home/showdocument?id=4062")

#create a function that builds the data frame and renames the columns
clean <- function(x) {
  cleanNvsen <- do.call(rbind, x)
  cleanNvsen2 <- as.data.frame(cleanNvsen[3:nrow(cleanNvsen),])

  names(cleanNvsen2)[1] <- "District"
  names(cleanNvsen2)[2] <- "Democrat"
  names(cleanNvsen2)[3] <- "Independent American"
  names(cleanNvsen2)[4] <- "Libertarian"
  names(cleanNvsen2)[5] <- "Nonpartisan"
  names(cleanNvsen2)[6] <- "Other"
  names(cleanNvsen2)[7] <- "Republican"
  names(cleanNvsen2)[8] <- "Total"
}

x2 <- clean(jan16s_raw)

head(x2)

I'd really like to get this working so that I can feed R the URLs and then run this clean function I created. I have dozens of files to go through.

r pdf web-scraping pdf-scraping
1 Answer

You can write the clean function to both extract the data and rename the columns. All of the columns can be renamed at once instead of one at a time:

clean <- function(url) {
  jan16s_raw <- extract_tables(url)
  #create data frame
  cleanNvsen <- do.call(rbind, jan16s_raw)
  cleanNvsen2 <- as.data.frame(cleanNvsen[3:nrow(cleanNvsen),])
  #rename all of the columns
  names(cleanNvsen2) <- c("District", "Democrat", "Independent American", 
                  "Libertarian","Nonpartisan","Other","Republican","Total")

  return(cleanNvsen2)
}
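For context, the original function collapsed to a single value because an R function returns the value of its last evaluated expression, and a names(x)[i] <- "..." assignment evaluates to the assigned string, not to the data frame. A minimal illustration (the helper names f_no_return and f_with_return are made up for this sketch):

```r
# An R function returns its last evaluated expression.
# A replacement-function assignment like names(df)[1] <- "District"
# evaluates to the assigned value ("District"), so without an explicit
# return(df) the cleaned data frame is lost.
f_no_return <- function(df) {
  names(df)[1] <- "District"   # value of this expression: "District"
}

f_with_return <- function(df) {
  names(df)[1] <- "District"
  return(df)                   # explicitly return the modified copy
}

d <- data.frame(a = 1:3, b = 4:6)
f_no_return(d)    # length-1 character vector, not a data frame
f_with_return(d)  # the data frame with its first column renamed
```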

Create a vector of all the URLs you want to extract data from:

list_of_urls <- c('https://www.nvsos.gov/sos/home/showdocument?id=4062', 
                  'https://www.nvsos.gov/sos/home/showdocument?id=4064')

Then call the clean function for each URL and combine the data:

all_data <- purrr::map_df(list_of_urls, clean)
#OR
#all_data <- do.call(rbind, lapply(list_of_urls, clean))
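Since the stated goal is a single CSV, writing out the combined table is one more step. A sketch that runs without network access, with a stub (stub_clean, and the output file name) standing in for the real clean function:

```r
# Stub standing in for clean() so this sketch runs offline;
# in practice, pass the real URLs to clean() as shown above.
stub_clean <- function(url) {
  data.frame(District = url, Total = 1)
}

urls <- c("url_jan", "url_feb")

# Combine one data frame per URL into a single table.
all_data <- do.call(rbind, lapply(urls, stub_clean))

# Write the combined table to a single CSV.
out <- file.path(tempdir(), "nv_registration.csv")
write.csv(all_data, out, row.names = FALSE)
```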