R: How can I increase scraping speed with rvest?

I've just started scraping with the R rvest library. A little too ambitiously, I began by requesting 3206 subpages, from each of which I want to extract one row.

The string to scrape is the abstract text on each subpage. The problem is the duration: the full run takes far too long.

My question is:

Can I optimize my script (below) to make it run faster?

Background:

Querying only 3 of the pages works fine, but running my script over all of them takes a very long time. I don't know any Python (otherwise I could switch, since I've heard there is a library called aiohttp for this). If there is no other way, I would appreciate a link to a good tutorial or an alternative solution.

Script

library(rvest)
library(data.table)
#READ IN HTML
#Link: http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro
hydro <- read_html("http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro")
#GET ATTRIBUTES
attributes <- hydro %>%
  html_nodes("#list a") %>%
  html_attrs()
#WRITE URLs
urls <- list()
for (i in 1:3206) {
  href <- unlist(attributes[i])[1]   #the first attribute of each link is the href
  urls[[i]] <- paste0("http://www.globalenergyobservatory.org/", href)
}
#GET ABSTRACTS
abstracts <- list()
for (i in 1:3206) {
  page <- read_html(urls[[i]])
  abstracts[[i]] <- page %>%
    html_nodes("#Abstract_Block td") %>%
    html_text()
}
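
To get a feel for the total duration before firing off all 3206 requests, you can time a small sample and extrapolate. A minimal sketch, reusing the urls list built above (the sample size of 10 is an arbitrary choice):

#ESTIMATE FULL RUNTIME FROM A SMALL SAMPLE
sample_time <- system.time(
  for (i in 1:10) {
    read_html(urls[[i]]) %>%
      html_nodes("#Abstract_Block td") %>%
      html_text()
  }
)
#rough estimate for all 3206 pages, in seconds
sample_time[["elapsed"]] / 10 * 3206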

      

1 answer


All great comments, and I'd suggest following them. In addition, you can parallelize the requests; below is a timing comparison of your method against a parallelized version.

library(rvest)
library(data.table)
#READ IN HTML
#Link: http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro
hydro <- read_html("http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro")
#GET ATTRIBUTES
#html_attrs() is flattened to a vector; every other entry is an href
attrs <- unlist(hydro %>% html_nodes("#list a") %>% html_attrs())
attributes <- paste0("http://www.globalenergyobservatory.org/",
                     attrs[seq_along(attrs) %% 2 > 0])

# YOUR METHOD
time <- proc.time()
abstracts <- character(100)
for (i in 1:100) {
  page <- html_session(attributes[i])
  abstracts[i] <- html_nodes(read_html(page), css = "#Abstract_Block td") %>% html_text()
}
print(proc.time() - time)


# PROPOSED METHOD
time <- proc.time()
library(doSNOW)
library(foreach)
cluster <- makeCluster(2, type = "SOCK")
registerDoSNOW(cluster)
get_abstract <- function(url) {
  library(rvest)   #each worker process needs the package loaded
  page <- html_session(url)
  html_nodes(read_html(page), css = "#Abstract_Block td") %>% html_text()
}
big_list <- unlist(foreach(i = 1:100) %dopar% get_abstract(attributes[i]))
print(proc.time() - time)
stopCluster(cluster)

For your method, the output looks like this:

user  system elapsed 
6.01    0.31   61.48



For my method:

user  system elapsed 
0.26    0.08   16.33

This reduced the elapsed time from 61.48 s to 16.33 s, i.e. by roughly 73%, with only two worker processes.
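
Since the job is network-bound rather than CPU-bound, a larger cluster (say, makeCluster(4, type = "SOCK"), if your machine and the server can handle it) may cut the elapsed time further, up to whatever request rate the server will tolerate.

If you would rather not depend on doSNOW and foreach, the same idea can be written with only the base parallel package. A minimal sketch under the same assumptions (attributes is the URL vector built above, two worker processes, first 100 pages):

library(parallel)
cl <- makeCluster(2)
#each worker needs rvest loaded before it can fetch and parse pages
clusterEvalQ(cl, library(rvest))
big_list <- unlist(parLapply(cl, attributes[1:100], function(url) {
  read_html(url) %>%
    html_nodes("#Abstract_Block td") %>%
    html_text()
}))
stopCluster(cl)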
