When we work with large language models (LLMs), the model's responses are grounded in the data it was trained on. Training these models from scratch, however, is resource-intensive, demanding large amounts of GPU compute and power.
Thankfully, model optimization has advanced to the point where a pretrained model can be adapted using far less data and compute, through a process called fine-tuning.
The sample solution below walks through fine-tuning an LLM using the Oracle Cloud Infrastructure (OCI) Generative AI playground, an interface in the OCI console.