OpenAI’s budget GPT-4o mini model is now cheaper to fine-tune, too

A popular strategy for engaging with generative AI chatbots is to start with a well-crafted prompt. In fact, prompt engineering is an emerging skill for those pursuing career advancement in this age of artificial intelligence.

However, there is an alternative. For developers who have a budget to spend on large language model (LLM) development and a stock of custom data of their own, "fine-tuning" an AI model can be, in some cases, a superior approach.

Fine-tuning can be costly, however. The good news is that OpenAI on Tuesday announced drastically cheaper fine-tuning for its GPT-4o mini AI model, introduced last week.

Also: OpenAI offers GPT-4o mini to slash the cost of applications

A fine-tuning process involves subjecting an AI model to a new round of training after the initial training of the model. By uploading some data and running the training again, the neural “weights” — or “parameters” — of the model are changed from the stock version of the model. 

The result is a model whose predictions, when prompted, draw more heavily on the new training data set than the plain-vanilla model's do.
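The "uploading some data" step above starts with a training file in OpenAI's chat-style JSONL format, where each line is one example conversation. Here is a minimal sketch of preparing such a file; the field names follow OpenAI's fine-tuning documentation, but the example dialogue itself is hypothetical.

```python
import json

# Each training example is a short conversation; the "assistant" turn is
# the answer the fine-tuned model should learn to produce.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in our house style."},
        {"role": "user", "content": "What is our return policy?"},
        {"role": "assistant", "content": "Returns are accepted within 30 days."},
    ]},
]

# Write one JSON object per line -- the JSONL layout the Files API expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Once uploaded through OpenAI's Files API, the resulting file ID is handed to a fine-tuning job that names the base model to retrain.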

A neural network such as GPT-4o mini reflects a probability distribution, and its output (that is, its predictions) is simply the most likely text that follows the user’s prompt. By fine-tuning, one shifts that probability distribution in a certain direction. As a result, the model’s answers shift as well, to reflect the modified probability distribution. 

Fine-tuning is thus a means of nudging the model's output in the direction one wishes.

The cost of fine-tuning GPT-4o mini starts at $3 per million tokens used to train, according to OpenAI’s pricing guide. That’s less than half the $8 cost of GPT-3.5 “Turbo.” 

OpenAI is offering a deal of two million free tokens daily to qualifying institutions, through September 23. 

Also: Millennial men are most likely to enroll in gen AI upskilling courses, report shows

Note, however, that inference with a fine-tuned GPT-4o mini costs twice as much as the generic GPT-4o mini: 30 cents per one million input tokens (the tokens you use to prompt) and $1.20 per million output tokens (the tokens of the model's predictions).
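Putting the article's figures together, a back-of-the-envelope budget is straightforward arithmetic. The rates below are those quoted above ($3 per million training tokens, 30 cents per million input tokens, $1.20 per million output tokens); the token counts in the example are hypothetical.

```python
# Prices in dollars per one million tokens, from the article.
TRAIN_PER_M = 3.00   # fine-tuning training tokens
IN_PER_M = 0.30      # fine-tuned model, input (prompt) tokens
OUT_PER_M = 1.20     # fine-tuned model, output (prediction) tokens

def cost(train_tokens, input_tokens, output_tokens):
    """Total dollars for a fine-tuning run plus subsequent usage."""
    return (train_tokens * TRAIN_PER_M
            + input_tokens * IN_PER_M
            + output_tokens * OUT_PER_M) / 1_000_000

# e.g., 10M training tokens, then 5M prompt tokens and 1M output tokens:
# $30.00 + $1.50 + $1.20
total = cost(10_000_000, 5_000_000, 1_000_000)  # 32.70
```

The training cost dominates here, which is why the price cut on training tokens is the headline number.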

In addition to the cost advantage, OpenAI emphasizes that GPT-4o mini accepts four times as much training data per example as GPT-3.5, at 65,000 tokens.

Note that fine-tuning is only available for GPT-4o mini’s textual functionality, not its image tasks.

Before fine-tuning, it's worth considering other options. Continuing to refine prompts is still a good strategy, especially as refined prompts can help even after the model has been fine-tuned, according to OpenAI's fine-tuning documentation.

Another approach to getting more tailored results from LLMs is to use “retrieval-augmented generation (RAG),” an increasingly popular engineering approach that involves having the model make calls to an external source of truth, such as a database. 

While RAG can make each query more cumbersome, in a sense, by requiring the model to phone home to the database, it has advantages as well. When fine-tuning a model, it’s possible for the model to unlearn what was acquired in the original training stage. Tampering with the model parameters, in other words, can produce setbacks in terms of the broader, more general functionality that a model possesses.
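The retrieval step RAG adds can be sketched in a few lines. This toy version uses a two-entry dictionary as the "source of truth" and a crude word-overlap scorer in place of a real vector database; the documents and queries are hypothetical.

```python
# Stand-in for an external source of truth, such as a database.
documents = {
    "returns": "Returns are accepted within 30 days of purchase.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def retrieve(query):
    """Pick the document sharing the most words with the query
    (a toy stand-in for vector similarity search)."""
    query_words = set(query.lower().split())
    def overlap(text):
        return len(query_words & set(text.lower().split()))
    return max(documents.values(), key=overlap)

def augmented_prompt(query):
    """Splice the retrieved context into the prompt sent to the model."""
    return f"Context: {retrieve(query)}\n\nQuestion: {query}"

prompt = augmented_prompt("How long do returns take?")
```

The model itself is left untouched; each query simply "phones home" to the data store first, which is the extra round trip the article describes.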

Also: Make room for RAG: How Gen AI’s balance of power is shifting

A third alternative besides prompt engineering and RAG, and one closely related to RAG, is function-calling. In this approach, very specific questions, along with the requirement for a very specific form of answer, can be bundled up and sent to an external application as a function call. OpenAI and others refer to this as function-calling, tool use, or "agentic AI."
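The function-calling pattern looks roughly like this: the model is shown a schema describing an available tool, replies with the tool's name and JSON arguments, and the calling code dispatches those arguments to a real function. The `get_order_status` tool and the simulated model reply below are hypothetical; the schema shape follows the JSON-Schema style used by OpenAI's tools API.

```python
import json

# Schema advertising one tool to the model (hypothetical example).
tool_schema = {
    "name": "get_order_status",
    "description": "Look up the status of an order by ID.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def get_order_status(order_id):
    """Stub for the external application the model's call is routed to."""
    return {"order_id": order_id, "status": "shipped"}

# Simulated model output: the tool it chose and its JSON-encoded arguments.
model_call = {"name": "get_order_status", "arguments": '{"order_id": "A123"}'}

# The caller, not the model, executes the function and gets the answer.
handlers = {"get_order_status": get_order_status}
result = handlers[model_call["name"]](**json.loads(model_call["arguments"]))
```

The model never runs code itself; it only emits a structured request, which is what makes the answer's form so predictable.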

All these approaches will find their place, but at least fine-tuning experiments will cost a little less with OpenAI’s new prices. 

Note that Google also offers fine-tuning for its models, through its Vertex AI program, and many other model providers do as well. 

Re-training models is likely to become more commonplace, and may even make it to mobile devices someday, with sufficient computing power. 
