I just upgraded my Spark project from 2.2.1 to 2.3.0 and ran into the versioning exception below. I depend on spark-cassandra-connector 2.0.7 and cassandra-driver-core 3.4.0 from DataStax; the latter depends on netty 4.x, while Spark 2.3.0 uses 3.9.x.

The class that throws the exception, org.apache.spark.network.util.NettyMemoryMetrics, was introduced in Spark 2.3.0.

Is downgrading my Cassandra dependencies the only way around the exception? Thanks!
Exception in thread "main" java.lang.NoSuchMethodError: io.netty.buffer.PooledByteBufAllocator.metric()Lio/netty/buffer/PooledByteBufAllocatorMetric;
at org.apache.spark.network.util.NettyMemoryMetrics.registerMetrics(NettyMemoryMetrics.java:80)
at org.apache.spark.network.util.NettyMemoryMetrics.<init>(NettyMemoryMetrics.java:76)
at org.apache.spark.network.client.TransportClientFactory.<init>(TransportClientFactory.java:109)
at org.apache.spark.network.TransportContext.createClientFactory(TransportContext.java:99)
at org.apache.spark.rpc.netty.NettyRpcEnv.<init>(NettyRpcEnv.scala:71)
at org.apache.spark.rpc.netty.NettyRpcEnvFactory.create(NettyRpcEnv.scala:461)
at org.apache.spark.rpc.RpcEnv$.create(RpcEnv.scala:57)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:249)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:423)
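For context, conflicts like this are usually resolved by forcing a single netty version on the classpath rather than downgrading. The fragment below is a sketch under the assumption that the project builds with Maven; the version shown is illustrative (it matches what Spark 2.3.0 is believed to ship), not taken from the question, and should be verified with `mvn dependency:tree`.

```xml
<!-- Sketch: pin one netty version so Spark 2.3.0's NettyMemoryMetrics
     finds PooledByteBufAllocator.metric() at runtime.
     The exact version is an assumption; confirm against your build. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>4.1.17.Final</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```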
I am trying to pivot a Spark streaming Dataset (Structured Streaming), but I get an AnalysisException (excerpt below).
Can someone confirm that pivoting is indeed not supported in Structured Streaming (Spark 2.0), and perhaps suggest an alternative approach?
Exception in thread "main" org.apache.spark.sql.AnalysisException: Queries with streaming sources must be executed with writeStream.start();;
kafka
at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:297)
at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:36)
at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:34)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
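Whatever the answer on streaming support, the common fallback for a pivot is a groupBy with one conditional aggregate per expected pivot value. The sketch below illustrates that idea in plain Python over a static collection (the data and column names are made up for illustration; in Spark the same shape would be expressed with groupBy plus sum(when(...))).

```python
from collections import defaultdict

def manual_pivot(rows, key, pivot_col, value_col, pivot_values):
    """Pivot by pre-declaring the pivot values, one aggregate per value,
    mirroring the groupBy + conditional-aggregation workaround."""
    acc = defaultdict(lambda: {v: 0 for v in pivot_values})
    for row in rows:
        if row[pivot_col] in pivot_values:
            acc[row[key]][row[pivot_col]] += row[value_col]
    return {k: dict(v) for k, v in acc.items()}

# Hypothetical sensor readings, pivoted from long to wide form.
rows = [
    {"device": "a", "metric": "temp", "value": 20},
    {"device": "a", "metric": "hum",  "value": 55},
    {"device": "a", "metric": "temp", "value": 22},
    {"device": "b", "metric": "temp", "value": 18},
]
print(manual_pivot(rows, "device", "metric", "value", ["temp", "hum"]))
# {'a': {'temp': 42, 'hum': 55}, 'b': {'temp': 18, 'hum': 0}}
```

Because the pivot values are fixed up front, each output column is an ordinary aggregate, which is the property that makes this shape expressible where a dynamic pivot is not.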
I am trying to query SAP's data dictionary through ERPConnect's ABAP API. The code below retrieves the table name and various field attributes, but fails to display the field descriptions. Does anyone know why?

Thanks
REPORT ZSELECTCOMMAND.

TABLES: DD02L,
        DD03L,
        DD02T,
        DD04T.

DATA: BEGIN OF tb_meta,
        tabname   TYPE DD02L-tabname,
        fieldname TYPE DD03L-fieldname,
        datatype  TYPE DD03L-datatype,
        leng      TYPE DD03L-leng,
        decimals  TYPE DD03L-decimals,
        position  TYPE DD03L-position,
        desc      TYPE DD04T-ddtext,
      END OF tb_meta.

DATA utb_meta LIKE STANDARD TABLE OF tb_meta.
DATA: ln_meta LIKE LINE OF utb_meta,
      m1 TYPE i,
      m2 TYPE i.

SELECT
    tb~tabname
    fld~fieldname
    fld~datatype fld~leng
    fld~decimals fld~position
    x~ddtext
  INTO CORRESPONDING FIELDS OF TABLE utb_meta
  FROM
    dd02L AS tb
    INNER JOIN dd03L AS fld …