Description
1 - `docker model logs` hangs
Ever since I started using DMR, viewing the logs would very frequently hang while displaying an older entry, to the point that I stopped using `docker model logs` and would simply look at the contents of the log files directly when I needed to troubleshoot something. So when I noticed that the latest release, Docker Model Runner v1.1.11, included a fix for a bug that could result in an infinite loop when viewing the logs, I naturally tested this again, but was disappointed to see that the problem remains.
Platform: Windows/amd64
Docker Desktop: 4.64.0 (221278)
Docker Engine: 29.2.1
Steps to reproduce
1. Delete `inference-llama.cpp-server.log` in `C:\Users\[USERNAME]\AppData\Local\Docker\log\host` to start from a known state that doesn't hang.
2. Execute `docker model logs`. For example:

   ```
   [2026-03-14T13:22:37.058967500Z][inference.model-manager] Successfully initialized store
   [2026-03-14T13:22:37.059515600Z][inference] 2 backends available
   [2026-03-14T13:22:37.949478600Z][inference] Reconciling service state on initialization
   [2026-03-14T13:22:37.950040600Z][inference] Reconciling service state on settings change
   [2026-03-14T13:22:37.950040600Z][inference] downloadLatestLlamaCpp: latest, cpu, C:\Program Files\Docker\Docker\resources\model-runner\bin, <HOME>\.docker\bin\inference\com.docker.llama-server.exe
   [2026-03-14T13:22:38.173678600Z][inference] current llama.cpp version is already up to date
   [2026-03-14T13:22:38.733156000Z][inference] installed llama-server with gpuSupport=true
   [2026-03-14T13:22:38.733156000Z][inference][W] Backend installation failed for vllm: not implemented
   [2026-03-14T13:22:39.130815500Z][inference.model-manager] Listing available models
   [2026-03-14T13:22:39.134457300Z][inference.model-manager] Successfully listed models, count: 8
   [2026-03-14T13:22:39.176414700Z][inference.model-manager] Listing available models
   [2026-03-14T13:22:39.180820600Z][inference.model-manager] Successfully listed models, count: 8
   [2026-03-14T13:22:39.599588200Z][inference.model-manager] Listing available models
   [2026-03-14T13:22:39.603603900Z][inference.model-manager] Successfully listed models, count: 8
   [2026-03-14T13:23:02.923335700Z][inference.model-manager] Listing available models
   ```

3. Create an `inference-llama.cpp-server.log` log file in `C:\Users\[USERNAME]\AppData\Local\Docker\log\host` with a single entry, replacing `[TIMESTAMP]` with a date and time that falls within the range of timestamps from the previous output but that doesn't match an existing timestamp, for example `[2026-03-14T13:22:38.733156001Z]`:

   ```
   [TIMESTAMP] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from llama.cpp log file
   ```

4. Execute `docker model logs` again. Notice that the output now includes the entry from the `inference-llama.cpp-server.log` file in its chronological position:

   ```
   [2026-03-14T13:22:37.058967500Z][inference.model-manager] Successfully initialized store
   [2026-03-14T13:22:37.059515600Z][inference] 2 backends available
   [2026-03-14T13:22:37.949478600Z][inference] Reconciling service state on initialization
   [2026-03-14T13:22:37.950040600Z][inference] Reconciling service state on settings change
   [2026-03-14T13:22:37.950040600Z][inference] downloadLatestLlamaCpp: latest, cpu, C:\Program Files\Docker\Docker\resources\model-runner\bin, <HOME>\.docker\bin\inference\com.docker.llama-server.exe
   [2026-03-14T13:22:38.173678600Z][inference] current llama.cpp version is already up to date
   [2026-03-14T13:22:38.733156000Z][inference] installed llama-server with gpuSupport=true
   [2026-03-14T13:22:38.733156000Z][inference][W] Backend installation failed for vllm: not implemented
   [2026-03-14T13:22:38.733156001Z] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from llama.cpp log file
   [2026-03-14T13:22:39.134457300Z][inference.model-manager] Successfully listed models, count: 8
   [2026-03-14T13:22:39.176414700Z][inference.model-manager] Listing available models
   [2026-03-14T13:22:39.180820600Z][inference.model-manager] Successfully listed models, count: 8
   [2026-03-14T13:22:39.599588200Z][inference.model-manager] Listing available models
   [2026-03-14T13:22:39.603603900Z][inference.model-manager] Successfully listed models, count: 8
   [2026-03-14T13:23:02.923335700Z][inference.model-manager] Listing available models
   ```

5. Now replace the timestamp in the `inference-llama.cpp-server.log` file to match one in the `inference.log` file, for example `[2026-03-14T13:22:38.733156000Z]`.
6. Execute `docker model logs` once again and notice that it hangs after displaying the last entry matching the timestamp entered:

   ```
   [2026-03-14T13:22:37.058967500Z][inference.model-manager] Successfully initialized store
   [2026-03-14T13:22:37.059515600Z][inference] 2 backends available
   [2026-03-14T13:22:37.949478600Z][inference] Reconciling service state on initialization
   [2026-03-14T13:22:37.950040600Z][inference] Reconciling service state on settings change
   [2026-03-14T13:22:37.950040600Z][inference] downloadLatestLlamaCpp: latest, cpu, C:\Program Files\Docker\Docker\resources\model-runner\bin, <HOME>\.docker\bin\inference\com.docker.llama-server.exe
   [2026-03-14T13:22:38.173678600Z][inference] current llama.cpp version is already up to date
   ```
This seems to be the same problem that was supposedly fixed in the latest release, so either the release notes are inaccurate and this bug fix was omitted, or the corresponding PR did not actually fix the problem.
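For what it's worth, the hang behaves as if the log merge only advances a cursor when one timestamp is strictly earlier than the other, so two entries with the exact same timestamp in different files leave both cursors stuck. The following Go sketch is purely hypothetical (it is not DMR's actual code, and `mergeBuggy` is my own name); the `maxIters` bound exists only so the demo terminates instead of hanging:

```go
package main

import "fmt"

// Hypothetical illustration only -- NOT Docker Model Runner's actual code.
// A two-way merge of timestamped log entries that only advances a cursor
// on strict inequality spins forever as soon as both sources hold an
// entry with the exact same timestamp.

type entry struct {
	ts   string // RFC 3339 timestamps compare correctly as strings
	line string
}

// mergeBuggy merges two timestamp-sorted logs but has no case for ties.
// maxIters bounds the loop so this demo terminates instead of hanging.
func mergeBuggy(a, b []entry, maxIters int) (merged []string, hung bool) {
	i, j := 0, 0
	for iters := 0; i < len(a) && j < len(b); iters++ {
		if iters >= maxIters {
			return merged, true // in the real bug this would spin forever
		}
		if a[i].ts < b[j].ts {
			merged = append(merged, a[i].line)
			i++
		} else if b[j].ts < a[i].ts {
			merged = append(merged, b[j].line)
			j++
		}
		// missing else branch: on a tie neither cursor advances
	}
	for ; i < len(a); i++ {
		merged = append(merged, a[i].line)
	}
	for ; j < len(b); j++ {
		merged = append(merged, b[j].line)
	}
	return merged, false
}

func main() {
	inference := []entry{
		{"2026-03-14T13:22:38.173678600Z", "current llama.cpp version is already up to date"},
		{"2026-03-14T13:22:38.733156000Z", "installed llama-server with gpuSupport=true"},
	}
	llamaCpp := []entry{
		{"2026-03-14T13:22:38.733156000Z", ">>> from llama.cpp log file"},
	}
	merged, hung := mergeBuggy(inference, llamaCpp, 1000)
	fmt.Println("emitted:", merged) // stops right before the duplicated timestamp
	fmt.Println("hung on duplicate timestamp:", hung)
}
```

Under that assumption the output would stop just before the duplicated timestamp, which matches the behavior observed in step 6 above.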
2 - Multiple processes created when viewing logs in the Docker Desktop Dashboard
When viewing the Models section, each time you click the Logs tab, a new docker-model.exe instance is launched. These processes don't exit, so after clicking just a few times, multiple docker-model.exe instances have accumulated.
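The accumulation is consistent with the dashboard starting a fresh log-follower child process per click and never stopping the previous one. Here is a hypothetical Go sketch of that pattern and the obvious fix; the `logViewer` type and the `sleep` placeholder command are my own inventions, not Docker Desktop's code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical illustration only -- NOT Docker Desktop's actual code.
// If each click of the Logs tab starts a new follower process and no
// one ever stops the previous one, the children accumulate exactly as
// observed in Task Manager.

type logViewer struct {
	procs []*exec.Cmd
}

// openLogsLeaky mimics the buggy behavior: start a follower and forget it.
func (v *logViewer) openLogsLeaky(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		return err
	}
	v.procs = append(v.procs, cmd) // earlier followers keep running
	return nil
}

// openLogsFixed stops every earlier follower before starting a new one.
func (v *logViewer) openLogsFixed(name string, args ...string) error {
	for _, p := range v.procs {
		_ = p.Process.Kill()
		_ = p.Wait() // reap so the child doesn't linger
	}
	v.procs = v.procs[:0]
	return v.openLogsLeaky(name, args...)
}

func main() {
	v := &logViewer{}
	for i := 0; i < 3; i++ {
		_ = v.openLogsLeaky("sleep", "1") // three clicks, three processes
	}
	fmt.Println("followers still tracked:", len(v.procs))
	_ = v.openLogsFixed("sleep", "1")
	fmt.Println("after fix:", len(v.procs))
}
```

With the leaky version, three clicks leave three live children; the fixed version keeps at most one follower alive at a time.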
This problem was previously reported here.
3 - Viewing the logs in WSL is unsupported
This has been the case ever since I started using DMR. It may not actually be a problem, as the message is very clear, but I find it surprising and have often wondered whether viewing the logs is really unsupported in WSL or whether it's something specific to my environment.