chirospasm
Deliverer of ideas for a living. Believer in internet autonomy and dignity. I maintain instances of FOSS platforms like this one for the masses. Previously on Twitter under the same handle. I do software things, but also I don’t.
- 0 Posts
- 1 Comment
Joined 2 years ago
Cake day: June 5th, 2023
Hello! I recently deployed GPUStack, a self-hosted GPU resource manager.
It helps you deploy AI models across clusters of GPUs, regardless of network or device. Got a Mac? It can toss a model on there and route it into an interface. Got a VM on a server somewhere? Same. How about your home PC, with that beefy gaming GPU? No prob. GPUStack is great at scaling what you have on hand, without having to deploy a bunch of independent instances of ollama, llama.cpp, etc.
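To give a sense of what that looks like from the client side: once GPUStack is up, everything it serves sits behind one OpenAI-compatible endpoint, no matter which machine actually runs the model. Here's a rough Python sketch that lists whatever is being served — the base URL, path, and API key are placeholders, so swap in whatever your own GPUStack instance reports:

```python
# Rough sketch: list the models a GPUStack instance is serving through its
# OpenAI-compatible API. Requires the `openai` package (pip install openai).
# The base_url and api_key are placeholders -- check your GPUStack deployment
# for the exact endpoint path and key.
from openai import OpenAI

client = OpenAI(
    base_url="http://your-gpustack-host/v1-openai",  # placeholder endpoint
    api_key="your-gpustack-api-key",                 # placeholder key
)

# Every model GPUStack has deployed shows up here, whether it lives on a Mac,
# a VM, or a gaming PC.
for model in client.models.list().data:
    print(model.id)
```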
I use it to route the LLMs it’s already serving into Open WebUI, another self-hosted interface for AI interactions, via the OpenAI-compatible API that both GPUStack and Open WebUI support!
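Under the hood, Open WebUI is just making OpenAI-style chat requests against that same endpoint. Here's a minimal sketch of the equivalent call in Python — again, the URL, key, and model name are placeholders for whatever your setup uses:

```python
# Minimal sketch of the kind of OpenAI-compatible chat request Open WebUI
# sends to GPUStack once you point it at the endpoint. Placeholders throughout.
from openai import OpenAI

client = OpenAI(
    base_url="http://your-gpustack-host/v1-openai",  # placeholder; same endpoint as above
    api_key="your-gpustack-api-key",                 # placeholder key
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder: any model GPUStack is serving
    messages=[{"role": "user", "content": "Hello from my self-hosted stack!"}],
)
print(response.choices[0].message.content)
```

In Open WebUI itself, you just add that endpoint as an OpenAI-compatible connection and the models GPUStack serves show up in the model picker.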