For the past 18 months, I've been running SajuGPT, an AI chatbot focused on a single domain: "Saju", traditional Korean fortune-telling and cosmology. When we started, neither our investors nor I was certain it would work. From the outset, we made a conscious decision to avoid relying on large API-based models like ChatGPT. Instead, we focused on fine-tuning smaller models (under 10 billion parameters). Our strategy wasn't to attract a few high-paying users, but to build a large, engaged base of free users.
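For context, here is a minimal sketch of the kind of sub-10B fine-tuning setup this strategy implies, using Hugging Face `transformers` with LoRA adapters via `peft`. Everything here is an illustrative assumption, not our production pipeline: the base model name, the `saju_dialogues.jsonl` corpus, and all hyperparameters are placeholders.

```python
# Minimal LoRA fine-tuning sketch for a sub-10B causal LM.
# All names and hyperparameters below are illustrative, not our actual config.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder; any sub-10B causal LM

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains only small adapter matrices, keeping the tuned parameter
# count tiny -- which is what makes iterating on a niche domain affordable.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# "saju_dialogues.jsonl" stands in for a domain corpus of fortune-telling
# Q&A pairs, one {"text": ...} record per line.
dataset = load_dataset("json", data_files="saju_dialogues.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sajugpt-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The appeal of a setup like this is that the whole loop fits on a single commodity GPU, so domain experiments stay cheap and fast.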
This journey involved numerous experiments with various sub-10B models. The key takeaway? A well-tuned small model can reach user satisfaction levels comparable to, or even exceeding, those of large language models (LLMs) within its specific niche. We proved this without significant advertising spend, relying almost entirely on viral growth and word of mouth.
The following sections share what we learned from those experiments.