
Several issues displaying log files #750

@f2bo

Description


1 - docker model logs hangs

Ever since I started using DMR, viewing the logs would very frequently hang while displaying an older entry, to the point that I stopped using docker model logs and simply read the log files directly when I needed to troubleshoot something. So when I noticed that the latest release, Docker Model Runner v1.1.11, included a fix for a bug that could result in an infinite loop when viewing the logs, I naturally tested this again, but was disappointed to see that the problem remains.

Platform: Windows/amd64
Docker Desktop: 4.64.0 (221278)
Docker Engine: 29.2.1


Steps to reproduce

  • Delete inference-llama.cpp-server.log in C:\Users\[USERNAME]\AppData\Local\Docker\log\host to start from a known state that doesn't hang.

  • Execute docker model logs. For example:

    [2026-03-14T13:22:37.058967500Z][inference.model-manager] Successfully initialized store
    [2026-03-14T13:22:37.059515600Z][inference] 2 backends available
    [2026-03-14T13:22:37.949478600Z][inference] Reconciling service state on initialization
    [2026-03-14T13:22:37.950040600Z][inference] Reconciling service state on settings change
    [2026-03-14T13:22:37.950040600Z][inference] downloadLatestLlamaCpp: latest, cpu, C:\Program Files\Docker\Docker\resources\model-runner\bin, <HOME>\.docker\bin\inference\com.docker.llama-server.exe
    [2026-03-14T13:22:38.173678600Z][inference] current llama.cpp version is already up to date
    [2026-03-14T13:22:38.733156000Z][inference] installed llama-server with gpuSupport=true
    [2026-03-14T13:22:38.733156000Z][inference][W] Backend installation failed for vllm: not implemented
    [2026-03-14T13:22:39.130815500Z][inference.model-manager] Listing available models
    [2026-03-14T13:22:39.134457300Z][inference.model-manager] Successfully listed models, count: 8
    [2026-03-14T13:22:39.176414700Z][inference.model-manager] Listing available models
    [2026-03-14T13:22:39.180820600Z][inference.model-manager] Successfully listed models, count: 8
    [2026-03-14T13:22:39.599588200Z][inference.model-manager] Listing available models
    [2026-03-14T13:22:39.603603900Z][inference.model-manager] Successfully listed models, count: 8
    [2026-03-14T13:23:02.923335700Z][inference.model-manager] Listing available models
    
  • Create an inference-llama.cpp-server.log file in C:\Users\[USERNAME]\AppData\Local\Docker\log\host containing a single entry. Replace [TIMESTAMP] with a date and time that falls within the range of timestamps from the previous output but does not match any existing timestamp, for example [2026-03-14T13:22:38.733156001Z].

    [TIMESTAMP] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from llama.cpp log file
    
  • Execute docker model logs again. Notice that the output now includes the entry from the inference-llama.cpp-server.log file in its chronological position.

    [2026-03-14T13:22:37.058967500Z][inference.model-manager] Successfully initialized store
    [2026-03-14T13:22:37.059515600Z][inference] 2 backends available
    [2026-03-14T13:22:37.949478600Z][inference] Reconciling service state on initialization
    [2026-03-14T13:22:37.950040600Z][inference] Reconciling service state on settings change
    [2026-03-14T13:22:37.950040600Z][inference] downloadLatestLlamaCpp: latest, cpu, C:\Program Files\Docker\Docker\resources\model-runner\bin, <HOME>\.docker\bin\inference\com.docker.llama-server.exe
    [2026-03-14T13:22:38.173678600Z][inference] current llama.cpp version is already up to date
    [2026-03-14T13:22:38.733156000Z][inference] installed llama-server with gpuSupport=true
    [2026-03-14T13:22:38.733156000Z][inference][W] Backend installation failed for vllm: not implemented
    [2026-03-14T13:22:38.733156001Z] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from llama.cpp log file
    [2026-03-14T13:22:39.134457300Z][inference.model-manager] Successfully listed models, count: 8
    [2026-03-14T13:22:39.176414700Z][inference.model-manager] Listing available models
    [2026-03-14T13:22:39.180820600Z][inference.model-manager] Successfully listed models, count: 8
    [2026-03-14T13:22:39.599588200Z][inference.model-manager] Listing available models
    [2026-03-14T13:22:39.603603900Z][inference.model-manager] Successfully listed models, count: 8
    [2026-03-14T13:23:02.923335700Z][inference.model-manager] Listing available models
    
  • Now change the timestamp in the inference-llama.cpp-server.log file so that it matches one in the inference.log file, for example [2026-03-14T13:22:38.733156000Z].

  • Execute docker model logs once again and notice that it hangs, stopping just before the entries that carry the duplicated timestamp.

    [2026-03-14T13:22:37.058967500Z][inference.model-manager] Successfully initialized store
    [2026-03-14T13:22:37.059515600Z][inference] 2 backends available
    [2026-03-14T13:22:37.949478600Z][inference] Reconciling service state on initialization
    [2026-03-14T13:22:37.950040600Z][inference] Reconciling service state on settings change
    [2026-03-14T13:22:37.950040600Z][inference] downloadLatestLlamaCpp: latest, cpu, C:\Program Files\Docker\Docker\resources\model-runner\bin, <HOME>\.docker\bin\inference\com.docker.llama-server.exe
    [2026-03-14T13:22:38.173678600Z][inference] current llama.cpp version is already up to date
    

This appears to be the same problem that was supposedly fixed in the latest release, so either the release notes are inaccurate and the fix was omitted, or the corresponding PR did not actually fix the problem.
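For what it's worth, the symptom (a hang exactly when the two files share a timestamp) is what you would expect from a timestamp-ordered merge whose comparison has no tie-break: with only strict less-than checks, equal timestamps advance neither stream. This is purely a guess at the failure mode, not DMR's actual code; merge_logs and merge_logs_fixed below are made-up names in a minimal model:

```python
def merge_logs(a, b, max_steps=1000):
    """Merge two lists of (timestamp, line) pairs, modeling the suspected bug:
    an entry is emitted only when one head is strictly older than the other,
    so a duplicated timestamp advances neither stream and the loop spins
    forever (bounded here by max_steps so the sketch terminates)."""
    i = j = 0
    out = []
    for _ in range(max_steps):
        if i == len(a) or j == len(b):
            break
        if a[i][0] < b[j][0]:
            out.append(a[i]); i += 1
        elif b[j][0] < a[i][0]:
            out.append(b[j]); j += 1
        # equal timestamps: neither branch fires -> no progress
    else:
        return out, True  # hit max_steps without finishing: stalled
    out.extend(a[i:])
    out.extend(b[j:])
    return out, False

def merge_logs_fixed(a, b):
    """Same merge with a tie-break: equal timestamps emit from `a` first."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if b[j][0] < a[i][0]:
            out.append(b[j]); j += 1
        else:
            out.append(a[i]); i += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out
```

With a duplicated timestamp, merge_logs stops after emitting only the entries that precede the tie, which matches where the output above cuts off; merge_logs_fixed emits everything.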

2 - Multiple processes created when viewing logs in the Docker Desktop Dashboard

When viewing the Models section, each time you click the Logs tab, it launches a new docker-model.exe instance.


These processes don't exit, so after clicking just a few times, multiple stale docker-model.exe instances are left running.

This problem was previously reported here.

3 - Viewing the logs in WSL is unsupported

This has always been the case ever since I started using DMR. It may not be a problem, since the error message is very clear, but I find it surprising and have often wondered whether it's really unsupported or something specific to my environment.
