I'm doing LSTM time series prediction. My data looks like this:

IDTime: an integer per day
TimePart: 0 = night, 1 = morning, 2 = afternoon
plus the 4 columns of values I'm trying to predict

I have 2686 values, 3 per day, so roughly 900 days in total, plus some newly added missing values.

I read and did something similar to https://www.tensorflow.org/tutorials/structured_data/time_series:
features_considered = ['TimePart', 'NmbrServices', 'LoggedInTimeMinutes', 'NmbrPersons', 'NmbrOfEmployees']
features = data[features_considered]
features.index = data.index
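To make the setup concrete, here is a minimal, self-contained sketch of the tutorial-style windowing and model I mean. The split point, window sizes, and the simplified version of the tutorial's multivariate_data helper are placeholders/assumptions, not my exact values or code:

import numpy as np
import tensorflow as tf

# Placeholder values (my real split and window sizes are not shown here).
TRAIN_SPLIT = 700        # assumed train/validation split point (rows)
PAST_HISTORY = 9         # assumed: 3 days x 3 time parts of history per sample
FUTURE_TARGET = 3        # assumed: predict the next day (3 time parts)

def multivariate_data(dataset, target, start, end, history_size, target_size):
    # Slide a window over the rows: each sample is history_size rows of all
    # features, each label is the next target_size values of the target column.
    data, labels = [], []
    for i in range(start + history_size, end):
        data.append(dataset[i - history_size:i])
        labels.append(target[i:i + target_size])
    return np.array(data), np.array(labels)

# Stand-in for the normalised feature matrix (features.values in my code);
# column 1 corresponds to NmbrServices (currentFeatureIndex = 1).
dataset = np.random.rand(2686, 5).astype("float32")
target = dataset[:, 1]

x_train, y_train = multivariate_data(dataset, target, 0, TRAIN_SPLIT,
                                     PAST_HISTORY, FUTURE_TARGET)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=x_train.shape[-2:]),
    tf.keras.layers.Dense(FUTURE_TARGET),
])
model.compile(optimizer="adam", loss="mae")
model.fit(x_train, y_train, epochs=2, batch_size=64)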
With currentFeatureIndex = 1 (i.e. NmbrServices):

currentFeatureIndex = 1
TRAIN_SPLIT = …

I used MLflow and logged the parameters with the function below (from pydataberlin).
import warnings

import numpy as np
import mlflow
from sklearn.linear_model import ElasticNet

# load_data and eval_metrics are helpers defined elsewhere in the pydataberlin notebook.
def train(alpha=0.5, l1_ratio=0.5):
    # train a model with given parameters
    warnings.filterwarnings("ignore")
    np.random.seed(40)

    # Read the wine-quality csv file (make sure you're running this from the root of MLflow!)
    data_path = "data/wine-quality.csv"
    train_x, train_y, test_x, test_y = load_data(data_path)

    # Useful for multiple runs (only doing one run in this sample notebook)
    with mlflow.start_run():
        # Execute ElasticNet
        lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
        lr.fit(train_x, train_y)

        # Evaluate Metrics
        predicted_qualities = lr.predict(test_x)
        (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)
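        # [Illustrative sketch, not part of my original snippet] In this style
        # of MLflow example the run usually continues by logging the parameters
        # and metrics; the exact keys below are assumptions.
        mlflow.log_param("alpha", alpha)
        mlflow.log_param("l1_ratio", l1_ratio)
        mlflow.log_metric("rmse", rmse)
        mlflow.log_metric("mae", mae)
        mlflow.log_metric("r2", r2)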
        # Print …

Hello, I've unzipped Spark and exported its path. When I launch it, I get the error below:
export PATH=$PATH:/usr/local/spark/spark24/bin
$ spark-shell

Error:
Traceback (most recent call last):
File "/usr/local/bin/find_spark_home.py", line 74, in <module>
print(_find_spark_home())
File "/usr/local/bin/find_spark_home.py", line 56, in _find_spark_home
module_home = os.path.dirname(find_spec("pyspark").origin)
AttributeError: 'NoneType' object has no attribute 'origin'
/usr/local/bin/spark-shell: line 57: /bin/spark-submit: No such file or directory
What is my problem here?

The way I use pyspark is to always run the code below in Jupyter. Is this approach always necessary?
import findspark
findspark.init('/opt/spark2.4')
import pyspark
sc = pyspark.SparkContext()
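A trivial usage check (just for illustration; it assumes a plain local SparkContext) would look like this:

# Sanity check that the context works: sum the numbers 0..9 via an RDD.
rdd = sc.parallelize(range(10))
print(rdd.sum())  # prints 45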
Tags: python ×3, pyspark ×2, apache-spark ×1, artifacts ×1, keras ×1, lstm ×1, mlflow ×1, spark-shell ×1, tensorflow ×1