Post by mar*_*est

Creating a pandas DataFrame from SQL results

I am trying the following code:

from flask import Flask, Response
from flaskext.mysql import MySQL  # Flask-MySQL extension
import pandas as pd
import json

mysql = MySQL()
app = Flask(__name__)
app.config['MYSQL_DATABASE_USER'] = 'root'
app.config['MYSQL_DATABASE_PASSWORD'] = 'root'
app.config['MYSQL_DATABASE_DB'] = 'compData'
app.config['MYSQL_DATABASE_HOST'] = '0.0.0.0'
mysql.init_app(app)

@app.route("/Authenticate")
def Authenticate():
    cursor = mysql.connect().cursor()
    cursor.execute("SELECT * from abc limit 5")
    pro_info = pd.DataFrame(data=cursor.fetchall(), index=None, columns=[i[0] for i in cursor.description])

    return Response(json.dumps(pro_info),  mimetype='application/json')

if __name__ == "__main__":
    app.run()

but it gives me the error:

File "pathe\frame.py", line 303, in __init__
    raise PandasError('DataFrame constructor not properly called!')
pandas.core.common.PandasError: DataFrame constructor not properly called!

I want to create a pandas DataFrame from the SQL query result.
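The usual approach for this is to let pandas run the query itself via `pandas.read_sql`, which also avoids serializing the DataFrame with `json.dumps` (a DataFrame is not directly JSON-serializable). A minimal sketch, using an in-memory SQLite database as a stand-in for the MySQL connection (the table contents here are invented for illustration):

```python
import sqlite3
import json
import pandas as pd

# In-memory SQLite stands in for the MySQL connection in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE abc (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO abc VALUES (?, ?)", [(1, "a"), (2, "b")])

# read_sql builds the DataFrame, including column names, from the query;
# no manual cursor.fetchall() / cursor.description handling is needed.
df = pd.read_sql("SELECT * FROM abc LIMIT 5", conn)

# Serialize via the DataFrame's own JSON method rather than json.dumps(df).
payload = df.to_json(orient="records")
```

In the Flask view, `payload` could then be returned with `Response(payload, mimetype='application/json')`.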

python mysql sql dataframe pandas

Score: 1 · Answers: 1 · Views: 4531

'ConsoleBuffer' object has no attribute 'isatty'

I am using Sparkdl on Databricks Community Edition for image classification. I have added all the libraries, and I have created a DataFrame from the image data.

from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline
from sparkdl import DeepImageFeaturizer 

featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features", modelName="InceptionV3")
lr = LogisticRegression(maxIter=20, regParam=0.05, elasticNetParam=0.3, labelCol="label")
p = Pipeline(stages=[featurizer, lr])

p_model = p.fit(train_df)   




    AttributeError                            Traceback (most recent call last)
<command-2468766328144961> in <module>()
      7 p = Pipeline(stages=[featurizer, lr])
      8 
----> 9 p_model = p.fit(train_df)

/databricks/spark/python/pyspark/ml/base.py in fit(self, dataset, params)
     62                 return self.copy(params)._fit(dataset)
     63             else:
---> 64                 return self._fit(dataset)
     65         else:
     66             raise ValueError("Params must be either a param map …

apache-spark deep-learning databricks

Score: 0 · Answers: 1 · Views: 3873