os::commit_memory failed; error='Not enough space' (errno=12)

Problem description

I have a Spring Boot application running on an Ubuntu 20 EC2 machine, in which I create roughly 200,000 threads to write data to Kafka. It repeatedly fails with the following error:

[138.470s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f828d055000-0x00007f828d059000).
[138.470s][warning][os,thread] Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: [138.472s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
INFO: os::commit_memory(0x00007f828cf54000, 16384, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.

I tried increasing my EC2 instance's memory to 64 GB, but it didn't help. I am monitoring the process's memory footprint with docker stats and htop, and when it reaches around 10 GB it fails with the error above.

I also tried increasing the process's heap size and maximum memory:

docker run --rm --name test -e JAVA_OPTS=-Xmx64g -v /workspace/logs/test:/logs -t test:master

Below is my code:

    final int LIMIT = 200000;
    ExecutorService executorService = Executors.newFixedThreadPool(LIMIT);
    final CountDownLatch latch = new CountDownLatch(LIMIT);
    for (int i = 1; i <= LIMIT; i++) {
        final int counter = i;
        executorService.execute(() -> {
            try {
                kafkaTemplate.send("rf-data", Integer.toString(123), "asdsadsd");
                kafkaTemplate.send("rf-data", Integer.toString(123), "zczxczxczxc");
            } catch (Exception e) {
                logger.error("Error sending data: ", e);
            } finally {
                // Count down even if a send fails, so the latch can still reach zero.
                latch.countDown();
            }
        });
    }
    try {
        // Wait until all tasks have completed.
        latch.await();
    } catch (InterruptedException e) {
        logger.error("Error awaiting latch", e);
    }
java spring-boot docker ubuntu
1 Answer

In your docker compose YAML file you can change the container's memory limit. You can try raising that limit in the service's configuration section, like this:

  ...
    deploy:
      resources:
        limits:
          memory: <memory size>
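For context, a minimal compose file sketch is shown below; the service name test and the 16g value are assumptions for illustration (the image name is taken from the docker run command in the question). Depending on your Compose version, deploy.resources.limits may require Compose v2, swarm mode, or the --compatibility flag with the older docker-compose.

    services:
      test:
        image: test:master
        deploy:
          resources:
            limits:
              memory: 16g   # hypothetical value; size to your workload

Keep in mind that this limit caps the container's total memory use, including native memory such as thread stacks, not just the Java heap.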

More information here.
