EMA function works on R dataframe, but fails on spark dataframe - Sparklyr
I'm fairly new to both R and Spark.
I'm writing a function to compute the exponential moving average (EMA) of a series of data. I'm working on the Databricks Spark platform with the sparklyr package.
I wrote a function that works fine on an ordinary R data frame. However, it fails when applied to a Spark DataFrame.
At this point I'm not concerned with the correctness of the values (I'm using dummy values, e.g. init = 10 is arbitrary). I'm more interested in getting this to work on a Spark DataFrame.
library(sparklyr)
library(dplyr)
library(stats)
sc <- spark_connect(method = "databricks")
set.seed(21)
#data
x <- rnorm(1e4)
#data in a dataframe
x_df <- data.frame(x)
#data in a Spark dataframe
x_sprk <- copy_to(sc, x_df, name ="x_sql", overwrite = TRUE)
#function to calculate Exponential moving average
ewma_filter <- function(df, ratio = 0.9) {
  mutate(df, ema = c(stats::filter(x * ratio, 1 - ratio, "recursive", init = 10)))
}
When I run this function on the R data frame, it works fine:
y_df <- x_df %>% ewma_filter()
Output:
x ema
1 0.6785634656 1.6107071191
2 -0.8519017349 -0.6056408495
3 -0.0362643838 -0.0932020304
4 0.2422350575 0.2086913487
5 -1.0401144499 -0.9152338701
6 1.4521621543 1.2154225519
7 -0.8531140006 -0.6462603453
8 0.4779933902 0.3655680167
9 1.0719294487 1.0012933055
10 -0.4115495580 -0.2702652716
11 2.4152301588 2.1466806157
12 -0.1045401223 0.1205819515
13 -0.1632591646 -0.1348750530
14 -2.1441820131 -1.9432513170
15 0.4672471535 0.2261973065
16 0.9362099384 0.8652086752
17 0.6494043831 0.6709848123
18 2.5609202716 2.3719267257
But when I try it on the Spark DataFrame, I don't get the expected output:
y_sprk <- x_sprk %>% ewma_filter()
Output:
x ema
1 0.679
2 -0.852
3 -0.0363
4 0.242
5 -1.04
6 1.45
7 -0.853
8 0.478
9 1.07
10 -0.412
# … with more rows
I tried using spark_apply():
y_sprk <- spark_apply(x_sprk, ewma_filter, columns = list(x = "numeric", ema = "numeric"))
I get the following error:
Error : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 115.0 failed 4 times, most recent failure: Lost task 0.3 in stage 115.0 (TID 8623, 10.139.64.6, executor 0): java.lang.Exception: sparklyr worker rscript failure with status 255, check worker logs for details.
at sparklyr.Rscript.init(rscript.scala:106)
at sparklyr.WorkerApply$$anon.run(workerapply.scala:116)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2355)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:2343)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:2342)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2342)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:1096)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:1096)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1096)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2574)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2510)
at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:893)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2240)
at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:270)
at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:280)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:80)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:86)
at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:508)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollectResult(limit.scala:55)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectResult(Dataset.scala:2828)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3440)
at org.apache.spark.sql.Dataset$$anonfun$collect.apply(Dataset.scala:2795)
at org.apache.spark.sql.Dataset$$anonfun$collect.apply(Dataset.scala:2795)
at org.apache.spark.sql.Dataset$$anonfun.apply(Dataset.scala:3424)
at org.apache.spark.sql.Dataset$$anonfun.apply(Dataset.scala:3419)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv.apply(SQLExecution.scala:99)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:228)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:85)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:158)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3419)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2795)
at sparklyr.Utils$.collect(utils.scala:204)
at sparklyr.Utils.collect(utils.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:139)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
at sparklyr.StreamHandler.read(stream.scala:66)
at sparklyr.BackendHandler.channelRead0(handler.scala:51)
at sparklyr.BackendHandler.channelRead0(handler.scala:4)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: sparklyr worker rscript failure with status 255, check worker logs for details.
at sparklyr.Rscript.init(rscript.scala:106)
at sparklyr.WorkerApply$$anon.run(workerapply.scala:116)
I would really appreciate any help debugging this and getting it to run on a Spark DataFrame.
You are really close! spark_apply() works against each partition of a Spark DataFrame by default, which is perfect for what you're trying to do. The error message you're getting doesn't tell you much - to really see what's going on, you have to look at the worker nodes' logs in stdout. On Databricks you can find them in the cluster UI under 'Spark UI - Master', then drill down into the worker nodes.
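One thing to keep in mind about that per-partition behavior: a recursive filter like this restarts at every partition boundary, so each partition gets its own init value. If that ever matters, here is a minimal sketch, using sparklyr's sdf_num_partitions() and sdf_repartition() helpers, of how you could check and collapse the partitioning so the EMA runs over the whole series in one pass:
# Sketch: inspect how many partitions the UDF will be applied to, then
# repartition down to a single partition so the recursive EMA sees the
# entire series rather than restarting in each partition.
sdf_num_partitions(x_sprk)
x_single <- sdf_repartition(x_sprk, partitions = 1)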
For your code, the error message is actually:
19/11/10 19:38:25 ERROR sparklyr: RScript (2719) terminated unexpectedly: could not find function "mutate"
Not finding mutate might seem strange, but the way these UDFs work is that an R process is spun up on the worker nodes, and for your function to work, all of the code/libraries it uses need to be available on those nodes as well. Since you're running on Databricks and dplyr is included in the Databricks Runtime, it is available on every worker node. You just need to reference the namespace or load the full library:
library(sparklyr)
library(dplyr)
library(stats)
sc <- spark_connect(method = "databricks")
# Create R dataframe
set.seed(21)
x <- rnorm(1e4)
x_df <- data.frame(x)
# Push R dataframe to Spark
x_sprk <- copy_to(sc, x_df, name ="x_sql", overwrite = TRUE)
# Distribute the R code across each partition
spark_apply(x_sprk, function(x) {
  # Define moving average function and reference dplyr explicitly
  ewma_filter <- function(df, ratio = 0.9) {
    dplyr::mutate(df, ema = c(stats::filter(x * ratio, 1 - ratio, "recursive", init = 10)))
  }
  # Apply it to each partition of the Spark DF
  ewma_filter(x)
})
These are the results of calling spark_apply():
# Source: spark<?> [?? x 2]
x ema
<dbl> <dbl>
1 0.793 1.71
2 0.522 0.641
3 1.75 1.64
4 -1.27 -0.981
5 2.20 1.88
6 0.433 0.578
7 -1.57 -1.36
8 -0.935 -0.977
9 0.0635 -0.0406
10 -0.00239 -0.00621
# … with more rows
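Equivalently, since dplyr is already installed on the workers, you could load the full library inside the function instead of prefixing the namespace. A minimal sketch of that variant (assuming, as above, that dplyr ships with the Databricks Runtime on every worker):
# Alternative sketch: load dplyr in the worker's R process rather than
# using the dplyr:: prefix on each call.
spark_apply(x_sprk, function(df) {
  library(dplyr)
  ewma_filter <- function(d, ratio = 0.9) {
    mutate(d, ema = c(stats::filter(x * ratio, 1 - ratio, "recursive", init = 10)))
  }
  ewma_filter(df)
})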