As a next step I created a custom `entrypoint.sh` for the LDAP container and pointed the Docker Compose file at it. In this script I removed all of the existing JVM setup and added just the `Xms` and `Xmx` parameters, both set to 4096m. The container memory limit was set to 5000m in the Docker file.
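A minimal sketch of such an entrypoint, assuming the server is started with plain `java -jar` (the jar path and variable name here are illustrative, not the real container's layout):

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: all prior JVM tuning removed,
# only fixed heap bounds are passed to the JVM.
# The jar path below is an assumption for illustration.
JAVA_OPTS="-Xms4096m -Xmx4096m"

exec java $JAVA_OPTS -jar /opt/ldap/server.jar
```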
Both metrics and logging were turned on.
A JMeter test was run against it for more than 8 hours (approximately 7000 logins); after that the test was stopped and the server was left without load. After more than 24 hours it ran out of memory. The heap was dumped with `jmap`; its size was minimal, so the heap was not the culprit. Investigating memory consumption with `docker stats ldap` showed 99% memory usage, but again it was not the heap.
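The commands used at this stage were along these lines (`<pid>` is the JVM process id inside the container; the dump path is an assumption):

```shell
# Dump only live objects from the JVM heap into a binary file
# that can be opened in a heap analyzer (e.g. Eclipse MAT):
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>

# One-shot snapshot of the container's memory/CPU usage:
docker stats --no-stream ldap
```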
Further investigation with `jstat` showed that it was the Metaspace that occupied more than 90% of the memory (the `M` column: 96.84%):
```
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT    CGC    CGCT     GCT
  0.00   0.00   0.86   3.65  96.84  94.59    299   17.654   102  369.959     7   0.598  388.210
```
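The figures above come from `jstat -gcutil <pid> <interval>`; a quick way to watch just the Metaspace utilization is to filter the fifth column (`M`) with `awk`. The sample line below is the one from the output above:

```shell
# `jstat -gcutil <pid> 5000` prints a line like this every 5 seconds;
# field 5 is M, the Metaspace utilization in percent.
echo "0.00 0.00 0.86 3.65 96.84 94.59 299 17.654 102 369.959 7 0.598 388.210" \
  | awk '{print "Metaspace: " $5 "%"}'
# → Metaspace: 96.84%
```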
So as a next step the Metaspace size was limited to `1024m` and `Xms`/`Xmx` were reduced to `3072m`. The results are still under investigation: the server has been running fine for 4 days and memory consumption seems to have stabilized.
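In terms of JVM options this amounts to the following (the flag names are the standard HotSpot ones; `JAVA_OPTS` is the same illustrative variable as before):

```shell
# Cap the Metaspace explicitly and shrink the heap so that
# heap + Metaspace still fit inside the 5000m container limit:
JAVA_OPTS="-Xms3072m -Xmx3072m -XX:MaxMetaspaceSize=1024m"
```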
As a conclusion: setting a limit for the Metaspace as well, rather than only giving ~75% of the container memory to the heap, seems to solve the issue, at least at first sight.
I will continue to investigate our deployment, possibly using the original `entrypoint.sh` and supplying the `Xmx`, `Xms` and Metaspace sizes via environment variables, as suggested in the documentation.
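A sketch of that env-variable approach, shown as the equivalent `docker run` (whether the original `entrypoint.sh` reads a `JAVA_OPTS` variable, and the image name, are assumptions about that image):

```shell
# Pass the JVM sizing into the container from outside instead of
# hard-coding it in a custom entrypoint. Image name is hypothetical.
docker run -d --name ldap --memory 5000m \
  -e JAVA_OPTS="-Xms3072m -Xmx3072m -XX:MaxMetaspaceSize=1024m" \
  ldap-image
```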