OpenAI Unleashes GPT-4 Turbo, Expands Chatbot Customizability - Decrypt

Sam Altman opens OpenAI’s first developer conference, held on November 6 in San Francisco. Image: OpenAI/YouTube

OpenAI introduced GPT-4 Turbo at its inaugural developer conference today, describing it as a more potent and cost-effective successor to GPT-4. The update boasts a much larger context window of 128,000 tokens and the flexibility to be fine-tuned to meet user requirements.

GPT-4 Turbo is available in two versions: one centered on text and another that also processes images. According to OpenAI, GPT-4 Turbo has been “optimized for performance,” with prices as low as $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens, roughly a third and half of GPT-4’s respective rates.
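
To put that pricing in concrete terms, here is a quick back-of-the-envelope comparison for a hypothetical request, using GPT-4’s published rates of $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens (the request size below is an assumption for illustration):

```python
# Hypothetical request: 3,000 input tokens and 1,000 output tokens
input_tokens, output_tokens = 3_000, 1_000

# Per-1,000-token rates in USD at launch
gpt4_cost = (input_tokens / 1000) * 0.03 + (output_tokens / 1000) * 0.06   # $0.15
turbo_cost = (input_tokens / 1000) * 0.01 + (output_tokens / 1000) * 0.03  # $0.06

print(f"GPT-4: ${gpt4_cost:.2f}, GPT-4 Turbo: ${turbo_cost:.2f}")
```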

ChatGPT custom made for you

How does this fine-tuning capability make GPT-4 Turbo so special?

“Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide number of tasks,” OpenAI explains. In essence, fine-tuning bridges the gap between generic AI models and customized solutions tailored to specific applications. It promises “higher quality results than prompting, token savings from shorter prompts, and faster request responses.”

Fine-tuning involves feeding a model extensive custom data to learn specific behaviors, transforming large generic models like GPT-4 into specialized tools for niche tasks without building an entirely new model. For example, a model tuned on medical information will provide more accurate results and will “speak” more like a doctor.
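
As a rough illustration of what that workflow looks like in practice, here is a minimal sketch using OpenAI’s Python SDK. The training file name and base model identifier are assumptions made for the example; at launch, GPT-4 fine-tuning was offered only through an experimental access program.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples
# (hypothetical file name; each line holds a {"messages": [...]} record).
training_file = client.files.create(
    file=open("medical_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on the uploaded data.
# "gpt-3.5-turbo" is a placeholder base model; GPT-4 fine-tuning
# required joining OpenAI's experimental access program at launch.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# Check on the job; once it completes, job.fine_tuned_model names
# the custom model, which can be called like any other chat model.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status, job.fine_tuned_model)
```

Once the job finishes, the resulting model identifier can be passed to the regular chat completions endpoint in place of the base model.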

A good analogy comes from image generators: fine-tuned versions of Stable Diffusion tend to produce better images in their specialty than the base Stable Diffusion XL or 1.5 models because they have learned from specialized data.

Before this innovation, OpenAI permitted limited modifications to its LLMs’ behavior via custom instructions. That was already a significant leap in quality for those seeking to customize OpenAI’s models. Fine-tuning goes further by training new data, tone, context, and voice into the model itself.

The value of fine-tuning is significant. As AI becomes more integral to our daily lives, there’s a growing need for models attuned to specific needs.

“Fine-tuning OpenAI text generation models can make them better for specific applications, but it requires a careful investment of time and effort,” OpenAI notes in its official guide.

The company has been consistently enhancing its models’ context handling, multimodal capabilities, and accuracy. With today’s announcement, this fine-tuning capability has no equal among mainstream closed-source LLMs like Anthropic’s Claude or Google’s Bard.

While open-source LLMs like Llama or Mistral can be fine-tuned, they don’t measure up in power and professional usability.

The launch of GPT-4 Turbo and its emphasis on fine-tuning mark a significant shift in AI technology. Users can anticipate more personalized and efficient interactions, with potential impacts spanning from customer support to content creation.
