I'm new to Java, so I'm sorry if this is an obvious question.
I'm trying to read a string character by character to create tree nodes. For example, the input "HJIOADH" should produce the nodes H J I O A D H.
I noticed that:

char node = reader.next().charAt(0); // this gets me the first char, H
char node = reader.next().charAt(1); // this gets me the second char, J
Can I use a loop to get all the characters? Something like:

for i to n
    node = reader.next().charAt(i)
I tried that, but it doesn't work. How should I do this? Thanks very much for your help.
Scanner reader = new Scanner(System.in);
System.out.println("Enter the nodes as capital letters without spaces, and enter '/' at the end");
int i = 0;
char node = reader.next().charAt(i);
while (node != '/') {
    CreateNode(node); // this is a function to create a tree node
    i++;
    node = reader.next().charAt(i);
}
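For context, each call to reader.next() reads a whole new token from the input, so calling it repeatedly with an increasing index walks across different tokens rather than stepping through one string. A minimal sketch of reading the token once and then looping over its characters, with the node-collection logic pulled into a helper (a println stands in for the asker's CreateNode, which isn't shown):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class TreeInput {
    // Collect node characters from a single input token, stopping at '/'.
    static List<Character> readNodes(String token) {
        List<Character> nodes = new ArrayList<>();
        for (int i = 0; i < token.length() && token.charAt(i) != '/'; i++) {
            nodes.add(token.charAt(i)); // stand-in for CreateNode(token.charAt(i))
        }
        return nodes;
    }

    public static void main(String[] args) {
        Scanner reader = new Scanner(System.in);
        System.out.println("Enter the nodes as capital letters without spaces, ending with '/'");
        String token = reader.next(); // read the whole token once
        System.out.println("Nodes: " + readNodes(token));
    }
}
```

The key change is storing the result of reader.next() in a String first, so the index i moves along the characters of that one string instead of triggering a fresh read on every iteration.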
I created a Dataproc cluster and tried to submit my local job for testing.
gcloud beta dataproc clusters create test-cluster \
--region us-central1 \
--zone us-central1-c \
--master-machine-type n1-standard-4 \
--master-boot-disk-size 500 \
--num-workers 2 \
--worker-machine-type n1-standard-4 \
--worker-boot-disk-size 500 \
--image-version preview-ubuntu18 \
--project my-project-id \
--service-account my-service-account@project-id.iam.gserviceaccount.com \
--scopes https://www.googleapis.com/auth/cloud-platform \
--tags dataproc,iap-remote-admin \
--subnet my-vpc \
--properties spark:spark.jars=gs://spark-lib/bigquery/spark-bigquery-latest.jar
Then I tried to submit a very simple script:
import argparse
from datetime import datetime, timedelta
from pyspark.sql import SparkSession, DataFrame

def load_data(spark: SparkSession):
    customers = spark.read.format('bigquery')\
        .option('table', 'MY_DATASET.MY_TABLE')\
        .load()
    customers.printSchema()
    customers.show()

if __name__ == '__main__':
    spark …
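Although the script above is cut off, a job of this shape would typically be submitted to the cluster with a command along these lines (the script path is a placeholder; the cluster name, region, and connector jar are taken from the cluster-creation command above):

```shell
# Submit the local PySpark script to the Dataproc cluster created earlier.
# my_script.py is a placeholder for the actual local script path.
gcloud dataproc jobs submit pyspark my_script.py \
    --cluster test-cluster \
    --region us-central1 \
    --jars gs://spark-lib/bigquery/spark-bigquery-latest.jar
```

Passing the spark-bigquery connector via --jars at submit time is an alternative to baking it into the cluster with the spark:spark.jars property; either way, the 'bigquery' data source must be on the driver and executor classpaths for spark.read.format('bigquery') to resolve.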
google-bigquery apache-spark google-cloud-platform pyspark dataproc