
Auto detect MTGPU #342

Draft
yeahdongcn wants to merge 1 commit into docker:auto-rocm from makllama:auto-musa

Conversation

@yeahdongcn
Contributor

Testing Done

❯ make build
CGO_ENABLED=1 go build -ldflags="-s -w" -o model-runner ./main.go
❯ ./model-runner
WARN[0000] Could not read VRAM size: could not get nvidia VRAM size 
INFO[0000] Running on system with 31859 MB RAM          
INFO[0000] Successfully initialized store                component=model-manager
INFO[0000] LLAMA_SERVER_PATH: /Applications/Docker.app/Contents/Resources/model-runner/bin 
INFO[0000] Supported MTHREADS GPU detected, MUSA will be used automatically 
INFO[0000] Metrics endpoint enabled at /metrics         
INFO[0000] Supported MTHREADS GPU detected, using MUSA variant
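
The "Supported MTHREADS GPU detected" lines in the log above suggest the runner probes for an MTHREADS device at startup and, when one is found, selects the MUSA build of llama-server instead of the CPU or NVIDIA variants. A minimal Go sketch of that kind of probe, assuming detection via a device node at `/dev/mtgpu` (a hypothetical path chosen for illustration; the actual detection logic in this PR is not shown here):

```go
package main

import (
	"fmt"
	"os"
)

// llamaVariant picks which llama-server build to use. devPath is the GPU
// device node to probe; "/dev/mtgpu" below is an assumption for this
// sketch, not taken from the PR itself.
func llamaVariant(devPath string) string {
	if _, err := os.Stat(devPath); err == nil {
		return "musa" // MTHREADS GPU present: use the MUSA variant
	}
	return "cpu" // no supported GPU found: fall back to CPU
}

func main() {
	fmt.Println(llamaVariant("/dev/mtgpu"))
}
```

On a host without the device node this falls back to "cpu", which matches the general shape of the log output (detect, then announce the variant that will be used automatically).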

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
