pyspark throws TypeError: textFile() missing 1 required positional argument: 'name'

Moh*_*med 3 python python-3.x apache-spark rdd pyspark

I googled this problem, but found no direct answer for spark-2.2.0-bin-hadoop2.7. I am trying to read a text file from a local directory, but I always get a TypeError saying the name argument is missing. This is the code in a Jupyter notebook with Python 3:

from pyspark import SparkContext as sc
data = sc.textFile("/home/bigdata/test.txt")

When I run the cell, I get this error:

TypeError                                 Traceback (most recent call last)
  <ipython-input-7-2a326e5b8f8c> in <module>()
  1 from pyspark import SparkContext as sc
  ----> 2 data = sc.textFile("/home/bigdata/test.txt")
  TypeError: textFile() missing 1 required positional argument: 'name'

Thanks for your help.

ale*_*cxe 6

You are calling the textFile() instance method

def textFile(self, name, minPartitions=None, use_unicode=True):

as if it were a static method. That causes the string "/home/bigdata/test.txt" to be bound to self, leaving the name argument unspecified, which produces the error.
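To see the mechanism without Spark, here is a minimal sketch in plain Python; FakeContext is a made-up stand-in whose textFile has the same signature as the real method:

```python
# Stand-in class: `textFile` has the same signature as the real
# SparkContext.textFile, so it reproduces the argument shift.
class FakeContext:
    def textFile(self, name, minPartitions=None, use_unicode=True):
        return "reading " + name

# Correct: call on an instance; Python supplies `self` automatically.
ctx = FakeContext()
print(ctx.textFile("/home/bigdata/test.txt"))  # reading /home/bigdata/test.txt

# Incorrect: call on the class; the path string is bound to `self`,
# so `name` goes unfilled and a TypeError is raised (the exact
# message wording varies slightly between Python versions).
try:
    FakeContext.textFile("/home/bigdata/test.txt")
except TypeError as err:
    print(err)
```

This is exactly what happens with `from pyspark import SparkContext as sc`: sc is the class itself, not an instance of it.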

Create an instance of the SparkContext class instead:

from pyspark import SparkConf
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
data = sc.textFile("/home/bigdata/test.txt")


kam*_*sar 6

from pyspark import SparkConf
from pyspark.context import SparkContext
sc = SparkContext.getOrCreate(SparkConf())
data = sc.textFile("my_file.txt")

This shows some content:

['this is the text file, sc works fine']
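The key point in both answers is that importing a class under a short name only aliases the class object; it does not create an instance. A minimal sketch with a made-up stand-in class (SparkContextStandIn is hypothetical, not the real pyspark API):

```python
# Hypothetical stand-in to show aliasing vs. instantiation.
class SparkContextStandIn:
    def textFile(self, name):
        return ["contents of " + name]

# Aliasing, like `from pyspark import SparkContext as sc`:
# sc is still the class itself, so instance methods cannot be
# called on it directly.
sc = SparkContextStandIn
print(type(sc))                    # <class 'type'> -- a class, not an instance

# Instantiating: now `self` is supplied and methods work.
sc = SparkContextStandIn()
print(sc.textFile("my_file.txt"))  # ['contents of my_file.txt']
```

With the real library, `SparkContext.getOrCreate(...)` plays the role of that instantiation step.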