Zero to llama.cpp: Run Local LLMs on Windows with AMD GPUs
In a time when RAM and GPU prices are through the roof and every service wants its "just 7 bucks, bro" from you, where can you turn to make use of modern technology in the form of Large Language Models? That's right: your gaming PC! That is, if you're a gamer, of course. And if so, this post is just right for you.