How can I run GraphX with Python / pyspark?

Gle*_*ker 27 python hadoop graph-theory apache-spark

I am attempting to run Spark GraphX with Python using pyspark. My installation appears to be correct, as I am able to run the pyspark tutorials and the (Java) GraphX tutorials. Presumably since GraphX is part of Spark, pyspark should be able to interface with it, correct?

Here are the tutorials for pyspark: http://spark.apache.org/docs/0.9.0/quick-start.html and http://spark.apache.org/docs/0.9.0/python-programming-guide.html

And here are the ones for GraphX: http://spark.apache.org/docs/0.9.0/graphx-programming-guide.html and http://ampcamp.berkeley.edu/big-data-mini-course/graph-analytics-with-graphx.html

Can anyone convert the GraphX tutorial to Python?

小智 20

It looks like the Python bindings for GraphX have slipped to at least Spark 1.4 1.5 ∞. They are waiting on the Java API.

You can track the status at SPARK-3789 GRAPHX Python bindings for GraphX - ASF JIRA


zhi*_*ibo 17

You should look at GraphFrames (https://github.com/graphframes/graphframes), which wraps GraphX algorithms under the DataFrames API and provides a Python interface.

Here is a quick example from http://graphframes.github.io/quick-start.html, slightly modified so that it works.

First, start pyspark with the graphframes package loaded:

pyspark --packages graphframes:graphframes:0.1.0-spark1.6

Then the Python code:

from graphframes import *

# Create a Vertex DataFrame with unique ID column "id"
v = sqlContext.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])

# Create an Edge DataFrame with "src" and "dst" columns
e = sqlContext.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()
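For readers who want to check the logic without a Spark installation, the three queries above can be sketched in plain Python on the same toy graph. This is a hand-rolled approximation, not the GraphFrames API; in particular the power-iteration loop only mimics what `g.pageRank(resetProbability=0.01, maxIter=20)` computes, using GraphX's unnormalized update rule.

```python
# Toy re-implementation of the GraphFrames queries above (no Spark needed).
from collections import Counter

vertices = ["a", "b", "c"]
edges = [("a", "b", "friend"), ("b", "c", "follow"), ("c", "b", "follow")]

# In-degree of each vertex: count incoming edges per destination,
# mirroring g.inDegrees (vertices with no incoming edges are omitted).
in_degrees = Counter(dst for _, dst, _ in edges)

# Number of "follow" connections, mirroring the edges.filter(...).count() query.
follow_count = sum(1 for _, _, rel in edges if rel == "follow")

# Power-iteration PageRank with resetProbability=0.01 and 20 iterations,
# approximating GraphX's update: rank = reset + (1 - reset) * sum(contributions).
reset = 0.01
ranks = {v: 1.0 for v in vertices}
out_edges = {v: [dst for src, dst, _ in edges if src == v] for v in vertices}
for _ in range(20):
    contrib = {v: 0.0 for v in vertices}
    for v, targets in out_edges.items():
        for t in targets:
            contrib[t] += ranks[v] / len(targets)
    ranks = {v: reset + (1 - reset) * contrib[v] for v in vertices}
```

On this graph, "b" ends up with the highest rank, since it receives edges from both "a" and "c".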

  • @Ian edited in a working example (3 upvotes)