Posted by Kat*_* A.

Is it possible to enable remote JMX monitoring programmatically?

I need to start a new Java process programmatically and set the JMX port dynamically. So instead of doing this:

-Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.port=9995 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false

I would like to do the following:

System.setProperty("java.rmi.server.hostname", "127.0.0.1" );
System.setProperty("com.sun.management.jmxremote", "true" );
System.setProperty("com.sun.management.jmxremote.authenticate", "false" );
System.setProperty("com.sun.management.jmxremote.ssl", "false" );
System.setProperty("com.sun.management.jmxremote.port", "9995"  );

But it doesn't work. Any idea why?
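The reason this doesn't work is that the JVM's built-in management agent reads the com.sun.management.jmxremote.* properties only once, while the JVM is starting up, so setting them with System.setProperty() from already-running code has no effect. A common workaround is to bypass the built-in agent and start a JMX connector server yourself. Here is a minimal sketch of that approach, reusing the port and hostname from the question; the empty environment map leaves SSL and authentication off, mirroring the original flags (not something to ship to production):

import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class ProgrammaticJmx {
    public static void main(String[] args) throws Exception {
        int port = 9995; // chosen dynamically in the real program

        // An RMI registry must exist before the connector server can bind into it.
        LocateRegistry.createRegistry(port);

        MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:" + port + "/jmxrmi");

        // Empty environment map: no SSL, no authentication, like the flags above.
        JMXConnectorServer connectorServer = JMXConnectorServerFactory
                .newJMXConnectorServer(url, new HashMap<String, Object>(), mbeanServer);
        connectorServer.start();

        System.out.println("JMX available at " + url);
    }
}

Alternatively, since the goal is to launch a new Java process anyway, the -D flags from the first snippet can simply be appended to the child's command line (for example via ProcessBuilder) with a port chosen at runtime.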

java monitoring jmx multiprocessing

9 votes, 1 answer, 2733 views

No space left on device exception, Amazon EMR medium instances and S3

I am running a MapReduce job on Amazon EMR that creates 40 output files of roughly 130 MB each. The last 9 reduce tasks fail with a "No space left on device" exception. Is this a cluster misconfiguration issue? The job runs without problems with fewer input files, fewer output files, and fewer reducers. Any help would be greatly appreciated. Thanks! The full stack trace is below:

Error: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.security.DigestOutputStream.write(DigestOutputStream.java:148)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.write(MultipartUploadOutputStream.java:135)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:60)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:83)
at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
at org.apache.hadoop.io.compress.CompressorStream.close(CompressorStream.java:105)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:111)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:558)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
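The stack trace itself hints at the cause: com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream sits on top of a plain java.io.FileOutputStream, i.e. each multipart part is staged on the task node's local disk before it is uploaded to S3, and medium instances have comparatively little instance storage. Two knobs that may help are pointing the S3 staging directory at the largest local volume and spreading the output over more reducers so each task stages less at a time. The sketch below shows both; the fs.s3.buffer.dir property comes from stock Hadoop, but the /mnt path and the reducer count are assumptions for illustration, not values confirmed by the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Stage S3 multipart parts on the instance-store volume instead of the
        // (smaller) root volume. The /mnt path is an assumption about the EMR
        // image; check where the large ephemeral disk is actually mounted.
        conf.set("fs.s3.buffer.dir", "/mnt/s3");

        Job job = Job.getInstance(conf, "large-output-job");
        // More reducers means smaller output files, so each task may need
        // less local staging space at any one time (80 is illustrative).
        job.setNumReduceTasks(80);
        // ... mapper/reducer/input/output configuration as usual ...
    }
}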

EDIT

I made some further attempts, but unfortunately I am still getting errors. I thought my instances might not have enough space because of the replication factor mentioned in the comments below, so I tried large instances instead of the medium ones I had been experimenting with so far. But this time I got yet another exception:

Error: java.io.IOException: Error closing multipart upload
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.uploadMultiParts(MultipartUploadOutputStream.java:207)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:222)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:105)
at org.apache.hadoop.io.compress.CompressorStream.close(CompressorStream.java:106)
at …
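Since this second failure also happens while closing the locally staged multipart upload, it is worth verifying how much local disk the task nodes actually have free before (or after) resizing instances. A quick diagnostic sketch; the mount points are assumptions, as on EMR the instance store is typically mounted under /mnt:

import java.io.File;

public class DiskCheck {
    public static void main(String[] args) {
        // Candidate mount points to inspect; adjust to the actual EMR layout.
        String[] mounts = {"/", "/mnt", "/tmp"};
        for (String path : mounts) {
            File f = new File(path);
            System.out.printf("%-6s %6.1f GB free of %6.1f GB%n",
                    path, f.getUsableSpace() / 1e9, f.getTotalSpace() / 1e9);
        }
    }
}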

storage hadoop amazon-s3 amazon-web-services emr

6 votes, 1 answer, 2763 views