By default, spark_read_jdbc() reads an entire database table into Spark. I use the following syntax to create these connections.
library(sparklyr)
library(dplyr)

config <- spark_config()
config$`sparklyr.shell.driver-class-path` <- "mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar"

sc <- spark_connect(master = "local",
                    version = "1.6.0",
                    hadoop_version = 2.4,
                    config = config)

db_tbl <- sc %>%
  spark_read_jdbc(sc = .,
                  name = "table_name",
                  options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                 user = "root",
                                 password = "password",
                                 dbtable = "table_name"))
However, I now have a situation where I have a table in a MySQL database and I only want to read a subset of that table into Spark.

How can I get spark_read_jdbc to accept a predicate? I tried adding a predicate to the options list without success:
db_tbl <- sc %>%
  spark_read_jdbc(sc = .,
                  name = "table_name",
                  options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                 user = "root",
                                 password = "password",
                                 dbtable = "table_name",
                                 predicates = "field > 1"))
You can replace dbtable with a subquery:
db_tbl <- sc %>%
  spark_read_jdbc(sc = .,
                  name = "table_name",
                  options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                 user = "root",
                                 password = "password",
                                 dbtable = "(SELECT * FROM table_name WHERE field > 1) as my_query"))
In a simple case like this, however, Spark should be able to push the filter down automatically when you write:
db_tbl %>% filter(field > 1)
Just make sure to set

memory = FALSE

in the spark_read_jdbc call.
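Putting the answer together, a minimal sketch (reusing the connection details, table name, and field name from the question, which you would replace with your own):

```r
library(sparklyr)
library(dplyr)

# memory = FALSE registers the table as a lazy reference instead of
# caching it into Spark's memory, so the dplyr filter below can be
# pushed down to MySQL as part of the generated JDBC query.
db_tbl <- spark_read_jdbc(
  sc,
  name = "table_name",
  options = list(
    url      = "jdbc:mysql://localhost:3306/schema_name",
    user     = "root",
    password = "password",
    dbtable  = "table_name"
  ),
  memory = FALSE
)

# Only rows with field > 1 are pulled from MySQL when this is collected.
db_tbl %>% filter(field > 1)
```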