Sparklyr/Hive: how to use regex (regexp_replace) correctly?

Consider the following example:

dataframe_test<- data_frame(mydate = c('2011-03-01T00:00:04.226Z', '2011-03-01T00:00:04.226Z'))

# A tibble: 2 x 1
                    mydate
                     <chr>
1 2011-03-01T00:00:04.226Z
2 2011-03-01T00:00:04.226Z

sdf <- copy_to(sc, dataframe_test, overwrite = TRUE)

> sdf
# Source:   table<dataframe_test> [?? x 1]
# Database: spark_connection
                    mydate
                     <chr>
1 2011-03-01T00:00:04.226Z
2 2011-03-01T00:00:04.226Z

I would like to modify the character timestamp so that it has a more conventional format. I tried to do so using regexp_replace, but it failed.

> sdf <- sdf %>% mutate(regex = regexp_replace(mydate, '(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}).(\d{3})Z', '-- ::.'))
> sdf
# Source:   lazy query [?? x 2]
# Database: spark_connection
                    mydate                    regex
                     <chr>                    <chr>
1 2011-03-01T00:00:04.226Z 2011-03-01T00:00:04.226Z
2 2011-03-01T00:00:04.226Z 2011-03-01T00:00:04.226Z

Any ideas? What would be the correct syntax?

Spark SQL and Hive provide two different functions:

  • regexp_extract - takes a string, a pattern, and the index of the group to extract.
  • regexp_replace - takes a string, a pattern, and a replacement string.

The former one can be used to extract a single group, with the index semantics being the same as for java.util.regex.Matcher.
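Those group-index semantics can be checked directly against java.util.regex itself. A minimal standalone Java sketch (not tied to Spark): group 0 is the whole match, group 1 the first capture, and so on.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexGroups {
    public static void main(String[] args) {
        // Same indexing as Spark's regexp_extract:
        // 0 = whole match, 1 = first capturing group, 2 = second, ...
        Pattern p = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})");
        Matcher m = p.matcher("2011-03-01T00:00:04.226Z");
        if (m.find()) {
            System.out.println(m.group(0)); // 2011-03-01
            System.out.println(m.group(1)); // 2011
            System.out.println(m.group(2)); // 03
        }
    }
}
```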

For regexp_replace, the pattern has to match the whole string; if there is no match, the input string is returned unchanged:

sdf %>% mutate(
 regex = regexp_replace(mydate, '^([0-9]{4}).*', ""),
 regexp_bad = regexp_replace(mydate, '([0-9]{4})', ""))

## Source:   query [2 x 3]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
## # A tibble: 2 x 3
##                     mydate regex               regexp_bad
##                      <chr> <chr>                    <chr>
## 1 2011-03-01T00:00:04.226Z  2011 2011-03-01T00:00:04.226Z
## 2 2011-03-01T00:00:04.226Z  2011 2011-03-01T00:00:04.226Z

regexp_extract doesn't require that:

sdf %>% mutate(regex = regexp_extract(mydate, '([0-9]{4})', 1))

## Source:   query [2 x 2]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
## # A tibble: 2 x 2
##                     mydate regex
##                      <chr> <chr>
## 1 2011-03-01T00:00:04.226Z  2011
## 2 2011-03-01T00:00:04.226Z  2011

Also, due to the indirect execution (R -> Java), you have to escape twice:

sdf %>% mutate(
  regex = regexp_replace(
    mydate, 
    '^(\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2}).(\\d{3})Z$',
    '-- ::.'))
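The same double-escaping shows up in any host language whose string literals consume backslashes before the regex engine ever sees the pattern. A small Java sketch of why the pattern must carry \\d rather than \d:

```java
public class DoubleEscape {
    public static void main(String[] args) {
        // The string literal "\\d{4}" contains the five characters \d{4},
        // which the regex engine then reads as "four digits". With a single
        // backslash the host language would consume it first, and the regex
        // engine would never see \d — the same reason an R string shipped
        // through sparklyr to Java needs '\\d' rather than '\d'.
        String pattern = "\\d{4}-\\d{2}-\\d{2}";
        System.out.println("2011-03-01".matches(pattern)); // true
        System.out.println("2011-3-01".matches(pattern));  // false
    }
}
```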

Normally, one would use Spark datetime functions:

spark_session(sc) %>%  
  invoke("sql",
    "SELECT *, DATE_FORMAT(CAST(mydate AS timestamp), 'yyyy-MM-dd HH:mm:ss.SSS') parsed from dataframe_test") %>% 
  sdf_register


## Source:   query [2 x 2]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
## # A tibble: 2 x 2
##                     mydate                  parsed
##                      <chr>                   <chr>
## 1 2011-03-01T00:00:04.226Z 2011-03-01 01:00:04.226
## 2 2011-03-01T00:00:04.226Z 2011-03-01 01:00:04.226

but unfortunately sparklyr seems to be very limited here, and treats timestamps as strings.
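For reference, the CAST + DATE_FORMAT step above amounts to parsing the ISO-8601 instant and re-rendering it in the session time zone (which is presumably why the output shows 01:00 rather than 00:00 on a UTC+1 machine). A minimal java.time sketch of the same conversion, pinned to UTC:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimestampFormat {
    public static void main(String[] args) {
        // Parse the ISO-8601 instant, then render it with the same
        // pattern used in the DATE_FORMAT call, in an explicit zone.
        Instant ts = Instant.parse("2011-03-01T00:00:04.226Z");
        DateTimeFormatter fmt = DateTimeFormatter
                .ofPattern("yyyy-MM-dd HH:mm:ss.SSS")
                .withZone(ZoneId.of("UTC"));
        System.out.println(fmt.format(ts)); // 2011-03-01 00:00:04.226
    }
}
```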

I had some trouble replacing "." with "", but in the end it worked with:

mutate(myvar2=regexp_replace(myvar, "[.]", ""))
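The character class [.] is needed because an unescaped dot is a regex metacharacter that matches any character. A quick Java sketch of the difference, using the same java.util.regex engine Spark relies on:

```java
public class DotEscape {
    public static void main(String[] args) {
        String s = "00:00:04.226";
        // Unescaped "." matches ANY character, so everything is replaced:
        System.out.println(s.replaceAll(".", ""));   // (empty string)
        // "[.]" (or the doubly escaped "\\.") matches only a literal dot:
        System.out.println(s.replaceAll("[.]", "")); // 00:00:04226
    }
}
```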