How can I prevent my computer from crashing when running an R script on a large dataset?

By reading the images in one at a time, I was hoping the script wouldn't take up too much memory or processing power, but I may have missed something. Could someone look over my code below and offer some insight into what is causing the crash? Any tips and suggestions would be greatly appreciated. Apologies for not providing a reproducible example.

rm(list=ls())

## 1. LOAD PACKAGES
library(magick)
library(purrr)
library(furrr)

## 2. SET MAIN FOLDER
Directory_Folder <- "C:/Users/Nick/Downloads/" 
Folder_Name <- "Photos for Nick"

## 3. SET NEW LOCATION
New_Directory <- "C:/Users/Daikoro/Desktop/"     ## MAKE SURE TO INCLUDE THE FINAL FORWARD SLASH

## 4. LIST ALL FILES
list.of.files <- list.files(path = paste0(Directory_Folder, Folder_Name), full.names = TRUE, recursive = TRUE)

## 5. FUNCTION FOR READING, RESIZING, AND WRITING IMAGES
MyFun <- function(i) {
  
  new.file.name <- gsub(Directory_Folder, New_Directory, i)
  
  magick::image_read(i) %>%             ## IMPORT PHOTO INTO R
    image_scale("400") %>%              ## IMAGE RE-SCALING
    image_write(path = new.file.name)
}

## 6. SET UP MULTI-CORES
future::plan(multiprocess)

## 7. RUN FUNCTION ON ALL FILES
future_map(list.of.files, MyFun)   ## THIS WILL TAKE A WHILE...AND CRASHES AT 1GB

Based on feedback from Ben Bolker, r2evans, and Waldi, I managed to get the script running. I added gc() as the last line of MyFun, and also specified the number of cores like this:

## SET UP MULTI-CORES
no_cores <- availableCores() - 1
future::plan(multisession, workers = no_cores)
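For completeness, here is a sketch of what MyFun looks like with gc() appended as its last line, reconstructed from the description above:

## 5. FUNCTION FOR READING, RESIZING, AND WRITING IMAGES (WITH gc() ADDED)
MyFun <- function(i) {
  
  new.file.name <- gsub(Directory_Folder, New_Directory, i)
  
  magick::image_read(i) %>%             ## IMPORT PHOTO INTO R
    image_scale("400") %>%              ## IMAGE RE-SCALING
    image_write(path = new.file.name)
  
  gc()                                  ## FREE MEMORY AFTER EACH IMAGE IS WRITTEN
}

Calling gc() after each file gives R a chance to release the memory held by magick image objects that are no longer referenced, which should keep per-worker memory use from building up.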

This made the script considerably slower, but at least it no longer crashes. I'm not sure whether that's because I had more processing cores available, or because of the gc() line.