Posts by Ale*_*oux

C - undefined reference to "sqrt", even with '-lm'

I am trying to compile a C library that needs "math.h". Here is the beginning of the .c file:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h> 
#include "sparse_matrix.h"
...

I compile with this command:

gcc -c ./sparse_matrix.c -o sparse_matrix.o -lm -Wall -pedantic -std=c99 -g -O

But even though the #include is in place and the -lm flag is there (I also tried putting it at the end of the line, no change), I still get the error:

undefined reference to « sqrt »
collect2: error: ld returned 1 exit status

After an hour of googling I still don't get it. I am using gcc 4.9 under Ubuntu 14.10 (Utopic Unicorn). Thanks in advance for your help!

c gcc math.h ld

10 votes · 1 answer · 10k views

Spark submit to Kubernetes: executors do not pull packages

I am trying to submit my PySpark application to a Kubernetes cluster (Minikube) with spark-submit:

./bin/spark-submit \
   --master k8s://https://192.168.64.4:8443 \
   --deploy-mode cluster \
   --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 \
   --conf spark.kubernetes.container.image='pyspark:dev' \
   --conf spark.kubernetes.container.image.pullPolicy='Never' \
   local:///main.py

The application accesses a Kafka instance deployed inside the cluster, so I specified the jar dependency:

--packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1

The container image I am using is based on one I built with the utility script, and I have packaged all the Python dependencies my application needs into it.

The driver deploys correctly, fetches the Kafka package (I can provide logs if needed), and starts the executors in new pods.

But then the executor pods crash:

ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.lang.ClassNotFoundException: org.apache.spark.sql.kafka010.KafkaBatchInputPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1986)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1850)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2160)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405) …
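One plausible cause, offered as an assumption rather than a confirmed diagnosis: `--packages` resolves jars via Ivy on the driver, but executor pods started from a `pullPolicy: Never` image with a `local://` application may never receive those downloaded jars, hence the `ClassNotFoundException` only on the executor side. A common workaround is to bake the connector (and its transitive dependencies such as `kafka-clients` and `commons-pool2`) into the image itself so every pod finds it on its own classpath. Paths below are assumptions about the image layout:

```shell
# Hypothetical sketch: add the Kafka connector jar to the image used by
# driver AND executors, then drop --packages from the submit command.
# In the Dockerfile that builds pyspark:dev:
#   ADD https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.12/3.0.1/spark-sql-kafka-0-10_2.12-3.0.1.jar /opt/spark/jars/

./bin/spark-submit \
   --master k8s://https://192.168.64.4:8443 \
   --deploy-mode cluster \
   --conf spark.kubernetes.container.image='pyspark:dev' \
   --conf spark.kubernetes.container.image.pullPolicy='Never' \
   local:///main.py
```

Since `/opt/spark/jars/` is on the default classpath of the standard Spark images, no extra `--jars` or `--packages` flag should be needed once the jar is in the image.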

python apache-spark kubernetes pyspark spark-submit

7 votes · 1 answer · 2,060 views

TFX IndexError on the Evaluator component

I am trying to build an evaluator for my model. All the other components have worked fine so far, but when I use this configuration:

eval_config = tfma.EvalConfig(
    model_specs=[
        tfma.ModelSpec(label_key='Category'),
    ],
    metrics_specs=tfma.metrics.default_multi_class_classification_specs(),
    slicing_specs=[
        tfma.SlicingSpec(),
        tfma.SlicingSpec(feature_keys=['Category'])
    ])

to build this evaluator:

model_resolver = ResolverNode(
      instance_name='latest_blessed_model_resolver',
      resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
      model=Channel(type=Model),
      model_blessing=Channel(type=ModelBlessing))
context.run(model_resolver)

evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    baseline_model=model_resolver.outputs['model'],
    eval_config=eval_config)
context.run(evaluator)

I get:

[...]
IndexError                                Traceback (most recent call last)
/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/common.cpython-37m-darwin.so in apache_beam.runners.common.DoFnRunner.process()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/common.cpython-37m-darwin.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/common.cpython-37m-darwin.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/common.cpython-37m-darwin.so in apache_beam.runners.common._OutputProcessor.process_outputs()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/worker/operations.cpython-37m-darwin.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/worker/operations.cpython-37m-darwin.so in apache_beam.runners.worker.operations.PGBKCVOperation.process()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/apache_beam/runners/worker/operations.cpython-37m-darwin.so in apache_beam.runners.worker.operations.PGBKCVOperation.process()

/opt/miniconda3/envs/archiving/lib/python3.7/site-packages/tensorflow_model_analysis/evaluators/metrics_and_plots_evaluator_v2.py in add_input(self, accumulator, element)
    355     for i, (c, a) in enumerate(zip(self._combiners, accumulator)):
--> 356       result = c.add_input(a, get_combiner_input(elements[0], i))
    357       for …
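One hedged guess at the failure: the traceback dies inside TFMA's combiner while indexing `elements[0]`, which can happen when the metrics specs and the label encoding disagree, e.g. multi-class metrics expecting an integer class id while `'Category'` is still a raw string. A minimal alternative configuration to try, building the metrics specs explicitly with `tfma.metrics.specs_from_metrics` instead of `default_multi_class_classification_specs()` (this is a sketch under that assumption, not a confirmed fix):

```python
import tensorflow as tf
import tensorflow_model_analysis as tfma

# SparseCategoricalAccuracy expects the label to be an integer class id;
# if 'Category' is a string it must be vocabulary-encoded (e.g. by the
# Transform component) before evaluation.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='Category')],
    metrics_specs=tfma.metrics.specs_from_metrics(
        [tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')]),
    slicing_specs=[tfma.SlicingSpec()])
```

If this configuration evaluates cleanly, the multi-class default specs (or the label encoding they assume) would be the place to look next.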

python tensorflow tfx

5 votes · 1 answer · 191 views

Tag statistics

python ×2

apache-spark ×1

c ×1

gcc ×1

kubernetes ×1

ld ×1

math.h ×1

pyspark ×1

spark-submit ×1

tensorflow ×1

tfx ×1