Post by use*_*180

ValueError: x and y must be the same size

import numpy as np
import pandas as pd
import matplotlib.pyplot as pt

data1 = pd.read_csv('stage1_labels.csv')

X = data1.iloc[:, :-1].values
y = data1.iloc[:, 1].values

from sklearn.preprocessing import LabelEncoder, OneHotEncoder
label_X = LabelEncoder()
X[:,0] = label_X.fit_transform(X[:,0])
encoder = OneHotEncoder(categorical_features = [0])
X = encoder.fit_transform(X).toarray()

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train,y_test = train_test_split(X, y, test_size = 0.4, random_state = 0)

#fitting Simple Regression to training set

from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)

#predicting the test set results
y_pred = …
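For reference, this ValueError is typically raised by matplotlib's `scatter`/`plot` when the two arrays passed to it have different lengths, e.g. plotting training inputs against test predictions. A minimal sketch of the same pipeline that avoids the mismatch (using synthetic stand-in data, since the real `stage1_labels.csv` isn't shown, and the current `sklearn.model_selection` import, which replaced the removed `sklearn.cross_validation` module):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split  # modern home of train_test_split
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the CSV data (hypothetical values).
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0)

regressor = LinearRegression().fit(X_train, y_train)
y_pred = regressor.predict(X_test)

# plt.scatter(X_train, y_pred)  # lengths differ -> "x and y must be the same size"
plt.scatter(X_test, y_pred)     # lengths match: one prediction per test sample
```

The key invariant: anything plotted against `y_pred` must come from the same split that produced it (`X_test`, `y_test`), never from the training split.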

python csv numpy machine-learning matplotlib

6 votes · 2 answers · 30,000 views

How do I define an img tag with React.createElement() in React?

While learning React I ran into a scenario where I want to define an "img" tag inside React.createElement(). I tried the following syntax, but I'm sure it's the wrong approach:

function Greeting() {
  return (
    <div>
      <Person />
      <Message />
    </div>
  );
}

const Person = () => {
  <h2>Its an Image</h2>;

  return React.createElement(
    "img",
    {},
    "https://images-eu.ssl-images-amazon.com/images/I/81l3rZK4lnL._AC_UL200_SR200,200_.jpg"
  );
};

The error I get is as follows:

Error: img is a void element tag and must neither have `children` nor use `dangerouslySetInnerHTML`.
▶ 15 stack frames were collapsed.
Module.<anonymous>
E:/REACT_APP/tutorial/src/index.js:43
  40 | return <p>Follow the white rabbit </p>;
  41 | };
  42 |
> 43 | ReactDom.render(<Greeting />, document.getElementById("root"));
  44 |

Please advise.


reactjs react-dom

4 votes · 1 answer · 5,736 views

Scala override keyword not working

I'm writing a simple piece of code to learn inheritance in Scala by overriding values of the superclass in a subclass:

class point(xy: Int, ry: Int) {

  var x: Int = xy
  var y: Int = ry

  def move(dx: Int, dy: Int) {

    x = x + dx
    y = y + dy

    println(x);
    println(y);

  }
}

class next(override val xy: Int, override val ry: Int, val tet: Int) extends point(xy, ry) {
  var r: Int = tet

  def move(dx: Int, dy: Int, dz: Int) {

    x = x + dx
    y = y + dy
    r = r + tet …

scala

1 vote · 1 answer · 136 views

CLI arguments with spark-submit when executing a Python file

I'm trying to convert a SQL Server table to .csv format through the following code in pyspark.

from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext("local", "Simple App")
sqlContext = SQLContext(sc)

df = sqlContext.read.format("jdbc").option("url","jdbc:sqlserver://server:port").option("databaseName","database").option("driver","com.microsoft.sqlserver.jdbc.SQLServerDriver").option("dbtable","table").option("user","uid").option("password","pwd").load()

df.registerTempTable("test")
df.write.format("com.databricks.spark.csv").save("full_path")

So, if I want to convert multiple tables, I need to write multiple dataframes. To avoid that, I want to take command-line arguments for the database name and the table names, and iterate over the tables with a for loop when building the dataframes.

Is that even possible? If so, can someone guide me on how to do it through spark-submit?
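It is possible: spark-submit forwards everything that follows the script path to the script unchanged, where it shows up in `sys.argv`. A minimal sketch of the argument-handling part (the script name, database, and table names below are hypothetical, and the JDBC/write calls from the question are left as comments since they need a live cluster):

```python
import sys

def parse_args(argv):
    # spark-submit forwards arguments after the script path as-is,
    # so argv looks like: [script, database, table1, table2, ...]
    if len(argv) < 3:
        raise SystemExit("usage: spark-submit script.py <database> <table> [<table> ...]")
    return argv[1], argv[2:]

def export_tables(argv):
    database, tables = parse_args(argv)
    for table in tables:
        # In the real job each table would become one dataframe, e.g.:
        # df = sqlContext.read.format("jdbc").option("dbtable", table)...\.load()
        # df.write.format("com.databricks.spark.csv").save(table + ".csv")
        yield database + "." + table

# In the actual job you would call export_tables(sys.argv).
# Simulated command line with hypothetical names:
exported = list(export_tables(["script.py", "mydb", "orders", "customers"]))
```

Invocation would then look like `spark-submit script.py mydb orders customers`, with one CSV written per listed table.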

python apache-spark pyspark spark-submit

0 votes · 1 answer · 2,026 views