Stop a dplyr / tidyr chain mid-execution and save the intermediate results
I have written a custom function that takes a while to run on a large dataset and sometimes stops partway through. It is a window function (for example cumsum). If execution stops, all of the data computed so far is lost. Is there a way with tidyr and dplyr to save the results as the computation goes, so that stopping early doesn't throw everything away?
My data is in wide format and I am running the function on groups (like Products) and on many variables (like metrics).
Product Year a b c d
1 A 2012 -0.54884514 -0.15416417 0.54861146 1.04147041
2 A 2013 1.22642587 1.43655028 -0.71433978 0.23523411
3 A 2014 -1.49161792 0.53356645 0.44964089 -0.01657906
4 A 2015 -0.72283864 -0.30601369 -0.04536668 -1.24809562
5 A 2016 0.41150740 1.42205301 0.59239525 1.82255169
6 B 2012 0.07279991 1.87163670 1.45773252 -1.93302885
7 B 2013 1.02705536 -2.70856122 0.57013708 1.35345098
8 B 2014 1.35513596 0.05818042 -0.41595725 -2.07142883
9 B 2015 0.40750419 0.13024750 -0.89163416 0.44227276
10 B 2016 0.25391609 0.02908517 -1.62128177 1.83811852
11 C 2012 -0.70568556 0.37254186 -0.61830412 -1.61228981
12 C 2013 -0.97811352 0.73741264 -0.60743864 0.12820628
13 C 2014 -0.20605945 -1.26239900 -0.21926510 -0.29185710
14 C 2015 -1.07297893 2.17374995 -0.29045520 -0.15203030
15 C 2016 -1.51221585 0.87294266 0.26420813 -0.70152124
16 D 2012 0.44717558 0.07587063 0.62215522 0.76882890
17 D 2013 -1.71815014 2.60236385 0.14437641 -0.60752707
18 D 2014 0.50659673 -0.57601702 0.09140279 -1.18971359
19 D 2015 -1.27493812 -0.76221085 0.58623989 0.37937413
20 D 2016 2.03280890 -0.39427715 0.29775332 0.88033461
If I use the tidy method, I can just gather the data and then group_by. This works, but I cannot stop execution midway without losing all of the results.
# The tidy way
dt2 <- dt %>%
  gather(Metric, Value, 3:6) %>%
  group_by(Product, Metric) %>%
  mutate(Metric2 = paste0(Metric, 2),
         Value2 = cumsum(Value)) %>%
  ungroup() %>%
  select(-Value, -Metric) %>% # I would love to leave the original metric in if possible
  spread(Metric2, Value2)
If I don't use the tidy method, I can stop execution at any time and the results computed up to that point are kept.
# The non-tidy way
dt2 <- tibble()
# pb = txtProgressBar(min = 0, max = 4, initial = 0, style = 3)
for (i in 1:4) {
  single_product <- dt[which(dt$Product == unique(dt$Product)[i]), ]
  for (j in 3:6) {
    single_metric <- single_product[, c(1:2, j)]
    single_metric[, paste0(colnames(single_metric[3]), 2)] <- cumsum(single_metric[3])
    single_product <- left_join(single_product, single_metric)
  }
  dt2 <- bind_rows(dt2, single_product)
  # setTxtProgressBar(pb, i)
}
Bonus points if we can add a progress bar. Here's the dummy data:
# The data
dt <- expand.grid(Product = LETTERS[1:4], Metric = letters[1:4], Year = 2012:2016)
dt$Value <- rnorm(nrow(dt))
dt <- dt %>%
  spread(Metric, Value)
The easiest way I can think of to save progress is to use a cache. In the code below, memoize_fun takes a function that computes a value (value_fun) and a function that computes a key for that value (key_fun). In this case, the key is the Product and the value is the complete data frame that you want to compute for that product. I have added messages to show when the cache is filled and when it is reused. Note that if the do step takes more than a few seconds, dplyr should automatically add a progress bar. You should see this the first time you run it, where the execution time is artificially inflated using calls to Sys.sleep.
library(dplyr)
library(tidyr)
library(magrittr)
library(assertthat) # provides assert_that()

dt <- expand.grid(Product = LETTERS, Metric = letters[1:4], Year = 2012:2016)
dt$Value <- rnorm(nrow(dt))
dt <- dt %>%
  spread(Metric, Value)

my_cache <- list()
memoize_fun <- function(value_fun, key_fun) {
  function(...) {
    key <- as.character(key_fun(...))
    message("Using key ", deparse(key))
    assert_that(is.character(key))
    assert_that(length(key) == 1)
    if (! key %in% names(my_cache)) {
      message("Computing value for ", deparse(key))
      my_cache[[key]] <<- value_fun(...)
      Sys.sleep(1) # artificial delay so the progress bar is visible
    } else {
      message("Re-using stored value for ", deparse(key))
    }
    return(my_cache[[key]])
  }
}
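To see how the wrapper behaves on its own, here is a minimal toy example (my own illustration, not part of the answer's pipeline): a deliberately slow squaring function memoized on its argument.

# Hypothetical toy example: memoize a slow function, keyed on its input.
slow_square <- function(x) { Sys.sleep(1); x^2 }
memo_square <- memoize_fun(value_fun = slow_square, key_fun = identity)
memo_square(4) # prints "Computing value for", runs slow_square, stores 16
memo_square(4) # prints "Re-using stored value for", returns 16 from my_cache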
metrics <- colnames(dt)[3:6]
system.time({
  dt2 <- dt %>%
    group_by(Product) %>%
    do({
      value_fun <- . %>% cbind(., CumSum = transmute_all(.[metrics], cumsum))
      key_fun <- . %>% .$Product %>% .[1]
      memoize_fun(value_fun, key_fun)(.)
    })
})
## Run the same thing again to demonstrate that everything is cached
system.time({
  dt2 <- dt %>%
    group_by(Product) %>%
    do({
      value_fun <- . %>% cbind(., CumSum = transmute_all(.[metrics], cumsum))
      key_fun <- . %>% .$Product %>% .[1]
      memoize_fun(value_fun, key_fun)(.)
    })
})
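Since my_cache lives only in memory, it survives an interrupted pipeline but not a crash of the whole R session. As a rough sketch of an extension (my own addition, not part of the original answer; the file name is hypothetical), you could checkpoint the cache to disk with saveRDS after each newly computed value:

# Sketch: a variant of memoize_fun that restores the cache from disk on
# creation and writes it back after each new value, so a full session
# crash loses at most the group currently being computed.
memoize_fun_disk <- function(value_fun, key_fun, cache_file = "my_cache.rds") {
  if (file.exists(cache_file)) my_cache <<- readRDS(cache_file) # restore
  function(...) {
    key <- as.character(key_fun(...))
    if (! key %in% names(my_cache)) {
      my_cache[[key]] <<- value_fun(...)
      saveRDS(my_cache, cache_file) # checkpoint immediately
    }
    my_cache[[key]]
  }
}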
We can also demonstrate that this works for restarting after random errors by giving each computation a 50% chance of failing, then wrapping the pipeline in code that keeps retrying until it reaches the end:
my_cache <- list() # Reset the cache
finished <- FALSE
tries <- 1
while (! finished) {
  message("Attempt number ", tries)
  tryCatch({
    dt2 <- dt %>%
      group_by(Product) %>%
      do({
        value_fun <- . %>% cbind(., CumSum = transmute_all(.[metrics], cumsum)) %T>%
          { if (runif(1) > 0.5) stop("Random error") }
        key_fun <- . %>% .$Product %>% .[1]
        memoize_fun(value_fun, key_fun)(.)
      })
    finished <- TRUE
  },
  error = function(...) NULL)
  tries <- tries + 1
}
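For completeness: the memoise package implements the same caching idea, including an on-disk backend. A minimal sketch, assuming the per-group computation is factored into its own function (cum_one_group is my own name, and the cache directory is hypothetical):

library(memoise)
# Cache each group's result on disk so it persists across R sessions.
cum_one_group <- function(df) cbind(df, CumSum = transmute_all(df[metrics], cumsum))
cum_cached <- memoise(cum_one_group, cache = cache_filesystem("./cumsum_cache"))
dt2 <- dt %>%
  group_by(Product) %>%
  do(cum_cached(.))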