Posts by bou*_*ert

Does client-side SocketInputStream.close() cause more resource consumption?

If I run the JUnit test below without the `inputStream.close()` line (see below), I can get through more than 60000 requests (at which point I killed the process). With that line in place, I never got past 15000 requests, failing with:

java.net.SocketException: No buffer space available (maximum connections reached?): connect
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:529)
    at java.net.Socket.connect(Socket.java:478)
    at java.net.Socket.<init>(Socket.java:375)
    at java.net.Socket.<init>(Socket.java:189)
    at SocketTest.callServer(SocketTest.java:60)
    at SocketTest.testResourceConsumption(SocketTest.java:52)

I am running this on Windows, and before starting a test I wait until the netstat list has returned to normal.

Questions:

  • Why is calling socketInputStream.close() on the client side harmful in this case?
  • Or is there something wrong with the code?

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

import junit.framework.TestCase;

public class SocketTest extends TestCase {
    private static final int PORT = 12345;
    private ServerSocket serverSocket;

    public void setUp() throws Exception {
        serverSocket = new ServerSocket(PORT);

        new Thread(new Runnable() {
            @Override …
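
The test body is cut off above. For context, here is a minimal sketch of what the elided pieces plausibly look like, reconstructed from the stack trace (testResourceConsumption calls callServer, which constructs a new Socket); the server loop and the single-byte echo are assumptions, not the original code:

    // Assumed server side: accept each connection, echo one byte, close.
    private void acceptLoop() throws IOException {
        while (true) {
            Socket socket = serverSocket.accept();
            OutputStream out = socket.getOutputStream();
            out.write(42);
            out.flush();
            socket.close();
        }
    }

    public void testResourceConsumption() throws Exception {
        for (int i = 0; i < 100000; i++) {
            callServer();
        }
    }

    private void callServer() throws IOException {
        Socket socket = new Socket("localhost", PORT); // SocketTest.java:60 in the trace
        InputStream inputStream = socket.getInputStream();
        inputStream.read();
        // The line under discussion: per the Socket.getInputStream() Javadoc,
        // closing the returned stream also closes the associated socket.
        inputStream.close();
        socket.close();
    }

One detail worth checking in the netstat output mentioned above: in TCP, the endpoint that performs the active close is the one that ends up in TIME_WAIT, so which side closes first changes which machine accumulates lingering connections.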

java sockets windows performance

Score: 5 · Answers: 1 · Views: 270

Why (in "cluster" mode) is my UDF executed locally (in the driver) instead of on the workers?

Two Spark workers are running, and the following code is executed (as a JUnit test):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.commons.lang3.tuple.ImmutablePair;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.testng.annotations.Test;

public class UdfTest {

    @Test
    public void simpleUdf() {
        SparkConf conf = new SparkConf()
                .set("spark.driver.host", "localhost")
                .setMaster("spark://host1:7077")
                .set("spark.jars", "/home/.../myjar.jar")
                .set("spark.submit.deployMode", "cluster")
                .setAppName("RESTWS ML");

        SparkSession sparkSession = SparkSession.builder().config(conf).getOrCreate();

        List<Row> rows = new ArrayList<>();
        for (long i = 0; i < 10; i++) {
            rows.add(RowFactory.create("cr" + i)); …
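
The test is truncated above. A minimal sketch of how a UDF is typically registered and applied using the imports shown (UDF1, functions, DataTypes, StructType); the UDF name "toUpper", the schema, and the column names are illustrative assumptions, not the original test code:

        StructType schema = new StructType(new StructField[] {
                new StructField("value", DataTypes.StringType, false, Metadata.empty()) });
        Dataset<Row> df = sparkSession.createDataFrame(rows, schema);

        // Register a trivial UDF; its body is serialized with the job and is
        // expected to run inside the executor JVMs once an action is triggered.
        sparkSession.udf().register("toUpper",
                (UDF1<String, String>) value -> value.toUpperCase(), DataTypes.StringType);

        Dataset<Row> result = df.withColumn("upper",
                functions.callUDF("toUpper", df.col("value")));
        result.show(); // the action that actually triggers execution

A common way to verify where the UDF body runs is to log the hostname (or print to stdout) inside the lambda and then compare the driver log with the executor logs.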

java user-defined-functions apache-spark

Score: 1 · Answers: 1 · Views: 1213