As an unconditional fan of language models such as LLaMA or GPT, I recently discovered, thanks to Hervé, a loyal reader of the site, this rare gem: a web interface that makes executing LLMs very easy.
The project's stated goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.
Called Text Generation Web UI, the tool based on Gradio offers an impressive list of features including:
- 3 interface modes suited to different workflows (default, notebook, and chat)
- Support for multiple model backends, so you're not locked into a single technology.
- A handy drop-down menu for quickly switching from one model to another.
- Support for LoRA (Low-Rank Adaptation of Large Language Models), a lightweight fine-tuning technique, with adapters that can be loaded and unloaded on the fly.
- Besides the classic Markdown output with LaTeX rendering, there's also an HTML output mode, notably for GPT4Chan (yes).
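If you're wondering why LoRA adapters can be swapped in and out so quickly, here's a tiny NumPy sketch of the underlying idea (purely illustrative, not the webui's actual code, and the layer sizes are made up for the example):

```python
import numpy as np

# LoRA in a nutshell: instead of updating a full weight matrix W (d x k),
# it learns two small matrices B (d x r) and A (r x k) with rank r << min(d, k),
# and the effective weight becomes W + B @ A. Swapping an adapter only means
# swapping the tiny A and B, which is why loading/unloading is fast.

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8              # hypothetical layer size and LoRA rank
W = rng.standard_normal((d, k))    # "frozen" pretrained weight
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))               # B starts at zero, so training starts exactly from W

W_effective = W + B @ A

# Adapter parameter count vs. the full matrix:
full_params = d * k
lora_params = d * r + r * k
print(lora_params / full_params)   # prints 0.03125: ~3% of the full weights
```

With B initialized to zero, the model behaves exactly like the base model until the adapter is trained, which is part of what makes the technique so painless.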
And that’s just the beginning!
Text Generation Web UI also offers one-click installers for Windows, Linux and macOS. However, please note that the AMD version does not work on Windows. But don’t worry! You can still manually install the interface using Conda.
Detailed instructions can be found on the official PyTorch website and in the project documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs.
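To give you an idea, a manual Conda install looks roughly like this (a sketch based on the project's docs; exact Python/PyTorch versions and the CUDA variant depend on your hardware, so check the linked documentation before copying it verbatim):

```shell
# Create and activate an isolated environment
conda create -n textgen python=3.10
conda activate textgen

# Install PyTorch first -- pick the right command for your hardware
# on pytorch.org (this line assumes an NVIDIA GPU with CUDA):
pip3 install torch torchvision torchaudio

# Grab the webui and its dependencies, then launch
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
python server.py
```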
If you are looking for clear tutorials for installing and using the Gradio web UI, the resources and guides provided in the project documentation should meet your expectations. And as a bonus, there are even tips for managing memory errors and optimizing performance with older graphics cards.
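For the older-graphics-card crowd, the docs describe launch flags along these lines (flag names are taken from the project's documentation and may change between versions, so treat these as examples to verify rather than gospel):

```shell
# Cap how much VRAM the model may use (here roughly 6 GiB)
python server.py --auto-devices --gpu-memory 6

# Load the model in 8-bit to roughly halve memory usage
python server.py --load-in-8bit

# Last resort: run entirely on the CPU
python server.py --cpu
```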
In conclusion, if you are looking for an effective tool for working with language models, look no further!