How to do parallel processing with rowwise
I am using rowwise to perform a function on each row. This takes a long time. To speed things up, is there a way to use parallel processing so that multiple cores work on different rows simultaneously?
For example, I am aggregating PRISM weather data (https://prism.oregonstate.edu/) to the state level while weighting by population. This is based on https://www.patrickbaylis.com/blog/2021-08-15-pop-weighted-weather/.
Note that the code below requires downloading the daily weather data as well as a shapefile with population estimates for very small geographic areas.
library(prism)
library(tidyverse)
library(sf)
library(exactextractr)
library(tigris)
library(terra)
library(raster)
library(ggthemes)
################################################################################
#get daily PRISM data
prism_set_dl_dir("/prism/daily/")
get_prism_dailys(type = "tmean", minDate = "2012-01-01", maxDate = "2021-07-31", keepZip=FALSE)
#Get states shape file and limit to lower 48
states = tigris::states(cb = TRUE, resolution = "20m") %>%
filter(!NAME %in% c("Alaska", "Hawaii", "Puerto Rico"))
setwd("/prism/daily")
################################################################################
#get list of files in the directory, and extract date
##see if it is stable (TRUE) or provisional data (FALSE)
list <- ls_prism_data(name = TRUE) %>%
  mutate(date1 = substr(files, nchar(files) - 11, nchar(files) - 4),
         date2 = substr(product_name, 1, 11),
         year = substr(date2, 8, 11),
         month = substr(date2, 1, 3),
         month2 = substr(date1, 5, 6),
         day = substr(date2, 5, 6),
         stable = str_detect(files, "stable"))
################################################################################
#function to get population weighted weather by state
#run the population raster outside of the loop
# SOURCE: https://sedac.ciesin.columbia.edu/data/set/usgrid-summary-file1-2000/data-download - Census 2000, population counts for continental US
pop_rast = raster("/population/usgrid_data_2000/geotiff/uspop00.tif")
pop_crop = crop(pop_rast, states)
states = tigris::states(cb = TRUE, resolution = "20m") %>%
filter(!NAME %in% c("Alaska", "Hawaii", "Puerto Rico"))
daily_weather <- function(varname, filename, date) {
weather_rast = raster(paste0(filename, "/", filename, ".bil"))
weather_crop = crop(weather_rast, states)
pop_rs = raster::resample(pop_crop, weather_crop)
states$value <- exact_extract(weather_crop, states, fun = "weighted_mean", weights=pop_rs)
names(states)[11] <- varname
states <- data.frame(states) %>% arrange(NAME) %>% dplyr::select(c(6,11))
states
}
################################################################################
days <- list %>% rowwise() %>% mutate(states = list(daily_weather("tmean", files, date1)))
As is, each row takes about 7 seconds. That adds up, since there are about 3,500 rows, and I want to get variables beyond tmean. So unless I can speed things up, the whole thing will take a day or more.
I am mainly interested in solutions that allow parallel processing with rowwise, but I also welcome other suggestions on how to speed up the code in other ways.
You could try purrr or its multiprocessing equivalent furrr (map() or pmap()). The fastest way would be to use data.table. See this blog post, which provides some benchmarks behind my recommendation.
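To make the furrr suggestion concrete, here is a minimal, self-contained sketch of the pattern: `slow_fn` is a hypothetical stand-in for the expensive per-row function (`daily_weather` above), and the `plan()` call comes from the future package, which furrr attaches.

```r
library(dplyr)
library(furrr)  # attaches future, which provides plan()

plan(multisession, workers = 4)  # one background R session per worker

slow_fn <- function(x) {
  Sys.sleep(0.1)  # stand-in for the ~7 s raster extraction per row
  x^2
}

df <- tibble(x = 1:8)

# Equivalent of rowwise() %>% mutate(), but rows run across the workers:
res <- df %>% mutate(out = future_map_dbl(x, slow_fn))

plan(sequential)  # shut the workers down

# Applied to the pipeline above, the rowwise() call could become
# something like (untested sketch; future exports globals such as
# states and pop_crop to the workers automatically, though shipping
# large rasters to each worker has a cost):
# days <- list %>%
#   mutate(states = future_map2(files, date1,
#                               ~ daily_weather("tmean", .x, .y)))
```

Note that `future_map2()` already returns a list column, so the explicit `list()` wrapper from the rowwise version is no longer needed.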