Sot*_*tos 7 python string-matching apache-spark apache-spark-sql pyspark
I am working with a huge dataset in PySpark and I want to filter one data frame based on strings in another data frame. For example,
dd = spark.createDataFrame(["something.google.com","something.google.com.somethingelse.ac.uk","something.good.com.cy", "something.good.com.cy.mal.org"], StringType()).toDF('domains')
+----------------------------------------+
|domains |
+----------------------------------------+
|something.google.com |
|something.google.com.somethingelse.ac.uk|
|something.good.com.cy |
|something.good.com.cy.mal.org |
+----------------------------------------+
dd1 = spark.createDataFrame(["google.com", "good.com.cy"], StringType()).toDF('gooddomains')
+-----------+
|gooddomains|
+-----------+
|google.com |
|good.com.cy|
+-----------+
You can assume that both domains and gooddomains are valid domain names.
What I want to do is filter out the rows in dd whose domain ends with one of the strings in dd1. So in the example above, I want to filter out rows 1 and 3 and end up with
+----------------------------------------+
|domains |
+----------------------------------------+
|something.google.com.somethingelse.ac.uk|
|something.good.com.cy.mal.org |
+----------------------------------------+
My current solution (shown below) can only account for domains of up to 3 "words". If I were to add, say, verygood.co.ac.uk to dd1 (i.e. the whitelist), it would fail.
import pyspark.sql.functions as F

def split_filter(x, whitelist):
    splitted1 = x.select(F.split(x['domains'], '\.').alias('splitted_domains'))
    last_two = splitted1.select(
        F.concat(splitted1.splitted_domains[F.size(splitted1.splitted_domains) - 2],
                 F.lit('.'),
                 splitted1.splitted_domains[F.size(splitted1.splitted_domains) - 1]).alias('last_two'))
    last_three = splitted1.select(
        F.concat(splitted1.splitted_domains[F.size(splitted1.splitted_domains) - 3],
                 F.lit('.'),
                 splitted1.splitted_domains[F.size(splitted1.splitted_domains) - 2],
                 F.lit('.'),
                 splitted1.splitted_domains[F.size(splitted1.splitted_domains) - 1]).alias('last_three'))
    x = x.withColumn('id', F.monotonically_increasing_id())
    last_two = last_two.withColumn('id', F.monotonically_increasing_id())
    last_three = last_three.withColumn('id', F.monotonically_increasing_id())
    final_d = x.join(last_two, ['id']).join(last_three, ['id'])
    df1 = final_d.join(whitelist, final_d['last_two'] == whitelist['domains'], how='left_anti')
    df2 = df1.join(whitelist, df1['last_three'] == whitelist['domains'], how='left_anti')
    return df2.drop('id')
I am using Spark 2.3.0 with Python 2.7.5.
use*_*362 10
Let's extend domains for slightly better coverage:
domains = spark.createDataFrame([
    "something.google.com",                      # OK
    "something.google.com.somethingelse.ac.uk",  # NOT OK
    "something.good.com.cy",                     # OK
    "something.good.com.cy.mal.org",             # NOT OK
    "something.bad.com.cy",                      # NOT OK
    "omgalsogood.com.cy",                        # NOT OK
    "good.com.cy",                               # OK
    "sogood.example.com",                        # OK Match for shorter redundant, mismatch on longer
    "notsoreal.googleecom"                       # NOT OK
], "string").toDF('domains')

good_domains = spark.createDataFrame([
    "google.com", "good.com.cy", "alsogood.com.cy",
    "good.example.com", "example.com"  # Redundant case
], "string").toDF('gooddomains')
Now... A naive solution, using only Spark SQL primitives, is to simplify your current approach a bit. Since you've already stated that it is safe to assume that these are valid public domains, we can define a function like this:
from pyspark.sql.functions import col, regexp_extract

def suffix(c):
    return regexp_extract(c, "([^.]+\\.[^.]+$)", 1)
which extracts the top-level domain and the first-level subdomain:
domains_with_suffix = (domains
    .withColumn("suffix", suffix("domains"))
    .alias("domains"))
good_domains_with_suffix = (good_domains
    .withColumn("suffix", suffix("gooddomains"))
    .alias("good_domains"))
domains_with_suffix.show()
+--------------------+--------------------+
| domains| suffix|
+--------------------+--------------------+
|something.google.com| google.com|
|something.google....| ac.uk|
|something.good.co...| com.cy|
|something.good.co...| mal.org|
|something.bad.com.cy| com.cy|
| omgalsogood.com.cy| com.cy|
| good.com.cy| com.cy|
| sogood.example.com| example.com|
|notsoreal.googleecom|notsoreal.googleecom|
+--------------------+--------------------+
Now we can outer join:
from pyspark.sql.functions import (
    col, concat, lit, monotonically_increasing_id, sum as sum_
)

candidates = (domains_with_suffix
    .join(
        good_domains_with_suffix,
        col("domains.suffix") == col("good_domains.suffix"),
        "left"))
and filter the result:
is_good_expr = (
    col("good_domains.suffix").isNotNull() &  # Match on suffix
    (
        # Exact match
        (col("domains") == col("gooddomains")) |
        # Subdomain match
        col("domains").endswith(concat(lit("."), col("gooddomains")))
    )
)

not_good_domains = (candidates
    .groupBy("domains")  # .groupBy("suffix", "domains") - see the discussion
    .agg((sum_(is_good_expr.cast("integer")) > 0).alias("any_good"))
    .filter(~col("any_good"))
    .drop("any_good"))
not_good_domains.show(truncate=False)
+----------------------------------------+
|domains |
+----------------------------------------+
|omgalsogood.com.cy |
|notsoreal.googleecom |
|something.good.com.cy.mal.org |
|something.google.com.somethingelse.ac.uk|
|something.bad.com.cy |
+----------------------------------------+
This is better than the Cartesian product required by a direct join with LIKE, but it is still brute force and in the worst case scenario requires two shuffles - one for the join (this one can be skipped if good_domains is small enough to be broadcasted), and another one for the group_by + agg.
Unfortunately, Spark SQL doesn't allow a custom partitioner that would use only one shuffle for both (it is however possible with a composite key in the RDD API), and the optimizer is not yet smart enough to optimize join(_, "key1") followed by .groupBy("key1", _).
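As a side note, if good_domains is known to be small, an explicit broadcast hint on that side of the join is one way to skip the first of those shuffles. A minimal sketch (not part of the original answer), reusing the DataFrames defined above and replacing the join step:
from pyspark.sql.functions import broadcast, col

# Broadcast the (small) whitelist so the suffix join avoids a shuffle;
# the grouping step afterwards still shuffles as described above.
candidates = (domains_with_suffix
    .join(
        broadcast(good_domains_with_suffix),
        col("domains.suffix") == col("good_domains.suffix"),
        "left"))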
If you can accept some false negatives, you can go probabilistic. First let's build a probabilistic counter (here using bounter with a little help from toolz):
from pyspark.sql.functions import concat_ws, reverse, split
from bounter import bounter
from toolz.curried import identity, partition_all

# This is only for testing on toy examples, in practice use more realistic value
size_mb = 20
chunk_size = 100

def reverse_domain(c):
    return concat_ws(".", reverse(split(c, "\\.")))

def merge(acc, xs):
    acc.update(xs)
    return acc

counter = sc.broadcast((good_domains
    .select(reverse_domain("gooddomains"))
    .rdd.flatMap(identity)
    # Chunk data into groups so we reduce the number of update calls
    .mapPartitions(partition_all(chunk_size))
    # Use tree aggregate to reduce pressure on the driver,
    # when number of partitions is large*
    # You can use depth parameter for further tuning
    .treeAggregate(bounter(need_iteration=False, size_mb=size_mb), merge, merge)))
Next define a user defined function like this one:
from pyspark.sql.functions import pandas_udf, PandasUDFType
from toolz import accumulate

def is_good_counter(counter):
    def is_good_(x):
        return any(
            x in counter.value
            for x in accumulate(lambda x, y: "{}.{}".format(x, y), x.split("."))
        )

    @pandas_udf("boolean", PandasUDFType.SCALAR)
    def _(xs):
        return xs.apply(is_good_)
    return _
and filter domains:
domains.filter(
    ~is_good_counter(counter)(reverse_domain("domains"))
).show(truncate=False)
+----------------------------------------+
|domains |
+----------------------------------------+
|something.google.com.somethingelse.ac.uk|
|something.good.com.cy.mal.org |
|something.bad.com.cy |
|omgalsogood.com.cy |
|notsoreal.googleecom |
+----------------------------------------+
In Scala this could be done with a bloomFilter:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import org.apache.spark.util.sketch.BloomFilter

def reverseDomain(c: Column) = concat_ws(".", reverse(split(c, "\\.")))

val checker = good_domains.stat.bloomFilter(
  // Adjust values depending on the data
  reverseDomain($"gooddomains"), 1000, 0.001
)

def isGood(checker: BloomFilter) = udf((s: String) =>
  s.split('.').toStream.scanLeft("") {
    case ("", x) => x
    case (acc, x) => s"${acc}.${x}"
  }.tail.exists(checker mightContain _))

domains.filter(!isGood(checker)(reverseDomain($"domains"))).show(false)
+----------------------------------------+
|domains |
+----------------------------------------+
|something.google.com.somethingelse.ac.uk|
|something.good.com.cy.mal.org |
|something.bad.com.cy |
|omgalsogood.com.cy |
|notsoreal.googleecom |
+----------------------------------------+
If needed, it shouldn't be hard to call such code from Python.
This might still not be fully satisfying, due to the approximate nature. If you require an exact result you can try to leverage the redundant nature of the data, for example with a trie (here using the datrie implementation).
If good_domains is relatively small you can create a single model, in a similar way as in the probabilistic variant:
import string
import datrie

def seq_op(acc, x):
    acc[x] = True
    return acc

def comb_op(acc1, acc2):
    acc1.update(acc2)
    return acc1

trie = sc.broadcast((good_domains
    .select(reverse_domain("gooddomains"))
    .rdd.flatMap(identity)
    # string.printable is a bit excessive if you need standard domain
    # and not enough if you allow internationalized domain names.
    # In the latter case you'll have to adjust the `alphabet`
    # or use different implementation of trie.
    .treeAggregate(datrie.Trie(string.printable), seq_op, comb_op)))
define a user defined function:
def is_good_trie(trie):
    def is_good_(x):
        if not x:
            return False
        else:
            return any(
                x == match or x[len(match)] == "."
                for match in trie.value.iter_prefixes(x)
            )

    @pandas_udf("boolean", PandasUDFType.SCALAR)
    def _(xs):
        return xs.apply(is_good_)
    return _
and apply it to the data:
domains.filter(
    ~is_good_trie(trie)(reverse_domain("domains"))
).show(truncate=False)
+----------------------------------------+
|domains |
+----------------------------------------+
|something.google.com.somethingelse.ac.uk|
|something.good.com.cy.mal.org |
|something.bad.com.cy |
|omgalsogood.com.cy |
|notsoreal.googleecom |
+----------------------------------------+
This specific approach works under the assumption that all good_domains can be compressed into a single trie, but it can be easily extended to handle cases where that assumption is not satisfied. For example you can build a single trie per top-level domain or suffix (as defined in the naive solution):
(good_domains
    .select(suffix("gooddomains"), reverse_domain("gooddomains"))
    .rdd
    .aggregateByKey(datrie.Trie(string.printable), seq_op, comb_op))
and then either load the models on demand from a serialized version, or use RDD operations.
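A rough sketch of the RDD route, assuming the per-suffix tries built just above. The names tries_by_suffix, matches and flagged are illustrative, not from the original answer, and the tries must be picklable for the shuffle, an assumption the treeAggregate variant already relies on:
# Build (suffix, trie) pairs from the whitelist, one trie per suffix.
tries_by_suffix = (good_domains
    .select(suffix("gooddomains").alias("suffix"),
            reverse_domain("gooddomains").alias("rev"))
    .rdd
    .map(lambda row: (row.suffix, row.rev))
    .aggregateByKey(datrie.Trie(string.printable), seq_op, comb_op))

def matches(rev, trie):
    # Same prefix check as in is_good_trie: exact match, or a dot right
    # after the matched prefix (so "com.good" does not match "com.goodness").
    return any(
        rev == m or rev[len(m)] == "."
        for m in trie.iter_prefixes(rev)
    )

# Key the domains by suffix, join against the per-suffix tries and keep
# only the rows with no matching whitelist entry.
flagged = (domains
    .select(col("domains").alias("domain"),
            suffix("domains").alias("suffix"),
            reverse_domain("domains").alias("rev"))
    .rdd
    .map(lambda row: (row.suffix, (row.domain, row.rev)))
    .leftOuterJoin(tries_by_suffix)
    .filter(lambda kv: kv[1][1] is None or not matches(kv[1][0][1], kv[1][1]))
    .map(lambda kv: kv[1][0][0]))
flagged then holds the domain strings that did not match the whitelist; whether this beats the single broadcast trie depends on the cardinality of the suffixes and on available executor memory.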
Both non-native approaches can be further adjusted depending on the data, business requirements (like false-negative tolerance in case of the approximate solution) and available resources (driver memory, executor memory, cardinality of suffixes, access to a distributed POSIX-compliant file system, and so on). There are also some trade-offs to consider when choosing between applying them on DataFrames and RDDs (memory usage, communication and serialization overhead).