GPT-4o and GPT-4 Fine-Tuning

Experimental access

GPT-4o and GPT-4 fine-tuning (text only) were initially available through an experimental access program. As of August 20, 2024, gpt-4o-2024-08-06 fine-tuning has moved from experimental access to General Availability (GA).

We recommend developers fine-tune gpt-4o-2024-08-06 rather than GPT-4: it is 2x faster, more than 10x cheaper for inference, 3x cheaper to train, and has higher rate limits. For those still interested in fine-tuning GPT-4, keep in mind that GPT-4 fine-tuning may require more work to achieve meaningful improvements over the base model, compared to the substantial gains realized with GPT-3.5 Turbo fine-tuning.
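As a minimal sketch, a fine-tuning job on the GA snapshot can be started with the OpenAI Python SDK (openai>=1.0). The file name "train.jsonl" and the job parameters below are placeholders for illustration, not values from this article.

```python
# Minimal sketch: start a gpt-4o-2024-08-06 fine-tuning job with the OpenAI
# Python SDK. "train.jsonl" is a hypothetical chat-format training file.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training file (one JSON chat example per line).
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the fine-tuning job on the GA model snapshot.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",
    training_file=training_file.id,
)

print(job.id, job.status)
```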

Pricing

gpt-4o-2024-05-13 and GPT-4 fine-tuning are offered at the following rates:

| Model | Training | Input usage | Output usage |
| --- | --- | --- | --- |
| gpt-4o-2024-05-13 | $45.00 / 1M tokens | $7.50 / 1M tokens | $22.50 / 1M tokens |
| gpt-4 | $90.00 / 1M tokens | $45.00 / 1M tokens | $90.00 / 1M tokens |
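To see how these rates translate into a bill, here is a rough cost estimate for a hypothetical job: a 2M-token training file run for 3 epochs, followed by 100k input and 20k output tokens of inference. All token counts are illustrative assumptions, not figures from this article.

```python
# Rough cost estimate from the per-token rates in the table above.
RATES = {  # $ per 1M tokens
    "gpt-4o-2024-05-13": {"train": 45.00, "input": 7.50, "output": 22.50},
    "gpt-4": {"train": 90.00, "input": 45.00, "output": 90.00},
}

def estimate_cost(model, train_tokens, epochs, input_tokens, output_tokens):
    r = RATES[model]
    training = train_tokens * epochs / 1e6 * r["train"]
    inference = input_tokens / 1e6 * r["input"] + output_tokens / 1e6 * r["output"]
    return training + inference

# gpt-4o-2024-05-13: 2M tokens * 3 epochs = 6M training tokens -> $270.00,
# plus about $1.20 of inference, for roughly $271.20 total.
print(f"${estimate_cost('gpt-4o-2024-05-13', 2_000_000, 3, 100_000, 20_000):.2f}")
```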