How to control the preferred locations of RDD partitions?

sam*_*m93 5 apache-spark rdd pyspark

Is there a way to manually set the preferred locations of RDD partitions? I want to make sure certain partitions are computed on specific machines.

I'm using an array and the 'Parallelize' method to create an RDD from that.

Also, I'm not using HDFS; the files are on local disk. That's why I want to control which nodes do the execution.

Jac*_*ski 7

Is there a way to manually set the preferredLocations of RDD partitions?

Yes, there is, but it's RDD-specific and so different kinds of RDDs have different ways to do it.

Spark uses RDD.preferredLocations to get a list of preferred locations to compute each partition/split on (e.g. block locations for an HDFS file).

final def preferredLocations(split: Partition): Seq[String]

Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.

As you can see, the method is final, which means that no one can ever override it.
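
To see what the final method reports, you can query it from the driver. A minimal sketch in Scala, assuming an existing SparkContext sc; the input path is a placeholder:

// Assumes an existing SparkContext `sc`; the input path is a placeholder.
val rdd = sc.textFile("hdfs:///data/input.txt")

// Ask where Spark would prefer to compute each partition
// (for an HDFS-backed RDD these are the HDFS block locations).
rdd.partitions.foreach { p =>
  println(s"partition ${p.index}: ${rdd.preferredLocations(p).mkString(", ")}")
}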

When you look at the source code of RDD.preferredLocations you will see how an RDD knows its preferred locations. It uses the protected RDD.getPreferredLocations method that a custom RDD may (but does not have to) override to specify placement preferences.

protected def getPreferredLocations(split: Partition): Seq[String] = Nil
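For illustration, here is a minimal sketch of a custom RDD that overrides this hook to pin each partition to a host. HostAwarePartition and HostAwareRDD are made-up names, not part of Spark's API:

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// A hypothetical partition that remembers the host it should run on.
class HostAwarePartition(val index: Int, val host: String) extends Partition

// A minimal custom RDD that reports one preferred host per partition.
class HostAwareRDD(sc: SparkContext, hosts: Seq[String])
  extends RDD[String](sc, Nil) {

  override protected def getPartitions: Array[Partition] =
    hosts.zipWithIndex
      .map { case (host, i) => new HostAwarePartition(i, host) }
      .toArray[Partition]

  // The hook Spark consults for placement preferences.
  override protected def getPreferredLocations(split: Partition): Seq[String] =
    Seq(split.asInstanceOf[HostAwarePartition].host)

  override def compute(split: Partition, context: TaskContext): Iterator[String] = {
    val p = split.asInstanceOf[HostAwarePartition]
    Iterator(s"partition ${p.index} prefers ${p.host}")
  }
}

Whether the scheduler actually honors the preference still depends on executor availability and the locality wait settings.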

So the question has now "morphed" into another one: which RDDs allow setting their preferred locations? Find yours and check its source code.

I'm using an array and the 'Parallelize' method to create an RDD from that.

If you parallelize a local dataset, it isn't really distributed to begin with (though Spark can make it so), but...why would you want to use Spark for something you can process locally on a single computer/node?

If however you insist and do really want to use Spark for local datasets, the RDD behind SparkContext.parallelize is...let's have a look at the source code... ParallelCollectionRDD, which does allow for location preferences.

Let's then rephrase your question to the following (hoping I won't lose any important fact):

What are the operators that allow for creating a ParallelCollectionRDD and specifying the location preferences explicitly?

To my great surprise (as I didn't know about the feature), there is such an operator, i.e. SparkContext.makeRDD, that...accepts one or more location preferences (hostnames of Spark nodes) for each object.

makeRDD[T](seq: Seq[(T, Seq[String])]): RDD[T]

Distribute a local Scala collection to form an RDD, with one or more location preferences (hostnames of Spark nodes) for each object. Create a new partition for each collection item.

In other words, rather than using parallelize you have to use makeRDD (which is available in the Spark Core API for Scala; I'm not sure about Python, which I'm leaving as a home exercise for you :))
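
A short sketch of makeRDD in Scala, assuming an existing SparkContext sc; host1 and host2 are placeholder worker hostnames:

// Each element is paired with the hostnames it should preferably run on.
val data = Seq(
  (1, Seq("host1")),
  (2, Seq("host2")),
  (3, Seq("host1", "host2"))  // either host is acceptable
)

// One partition is created per element, carrying its location preferences.
val rdd = sc.makeRDD(data)

// The preferences we supplied are reported back by preferredLocations.
rdd.partitions.foreach { p =>
  println(s"partition ${p.index} -> ${rdd.preferredLocations(p)}")
}

Note that these are preferences, not guarantees: the scheduler will fall back to other nodes if the preferred hosts have no free executors.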

The same reasoning applies to any other RDD operator/transformation that creates some kind of RDD.