Bench Talk for Design Engineers | The Official Blog of Mouser Electronics


GPT-3.5 vs. GPT-4: What's the Difference?

By Becks Simpson

(Source: ckybe - stock.adobe.com)

The generative pre-trained transformer (GPT) family of models from OpenAI has taken the world by storm, and keeping pace with the quickly emerging updates is sometimes difficult. These models have enabled use cases such as writing full articles, brainstorming ideas, and writing snippets of code, which were previously difficult, if not impossible, with existing natural language processing models. To make the best decisions about integrating GPTs into a product or company workflow, users should understand the differences in architecture, capability, and performance between each generation. Because GPT-3.5 and GPT-4 were released in quick succession and differ in speed and pricing, this knowledge can help companies pick the right model for their use cases.

GPT-3.5 and GPT-4 Overview

For those unfamiliar with the GPT family, it is a series of language models developed by OpenAI that use deep learning to generate human-like responses to given prompts. GPT-3.5 and GPT-4 are the latest additions to this family. GPT-3.5 is an update to GPT-3 that reportedly increases the number of parameters from 175 billion to 355 billion. This increase allows for more accurate predictions and the ability to generate more complex and nuanced responses to prompts. GPT-3.5 can be accessed through OpenAI’s application programming interface (API) and integrated into developers’ own products or workflows. Additionally, for those who prefer a user interface for interacting with the model, GPT-3.5 is currently the model “under the hood” of the free version of ChatGPT.
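As a rough sketch of that API access, the snippet below shows one way a developer might call GPT-3.5 from Python with OpenAI's openai package; the gpt-3.5-turbo model name and ChatCompletion interface reflect the library at the time of writing, and the prompt and settings are purely illustrative.

import os
import openai

# The API key comes from the OpenAI dashboard; reading it from an environment
# variable is a common convention, not a requirement.
openai.api_key = os.environ["OPENAI_API_KEY"]

# A single chat turn against GPT-3.5.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Brainstorm three taglines for an electronics blog."},
    ],
    temperature=0.7,  # higher values produce more varied wording
    max_tokens=200,   # cap the length of the reply
)

print(response["choices"][0]["message"]["content"])

Moving the same call to GPT-4 later is largely a matter of changing the model parameter once API access has been granted.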

On the other hand, GPT-4 is billed as OpenAI's most powerful language model yet, producing much longer responses and following human instructions more faithfully. OpenAI has not released GPT-4's exact number of parameters, but rumors suggest it might be in the trillions. GPT-4 can perform tasks such as writing essays, creating more complex code, and interacting with images as part of its prompting. As of April 2023, those who have requested API access can use GPT-4 in beta, and those who have upgraded to ChatGPT Plus can access it through the ChatGPT interface.

Architecture, Capability, and Performance Differences

One of the main differences between GPT-3.5 and GPT-4 lies in their architecture. GPT-3.5 largely uses the same architecture as GPT-3, which is a transformer-based model. However, the increase in the number of parameters allows more complex calculations to be made during training and inference, resulting in more accurate and nuanced responses. Moreover, the architecture leverages reinforcement learning from human feedback to enhance performance, allowing the model to learn from guidance provided by people and improve its ability to follow human commands. When used through ChatGPT, an added safety layer serves as a guardrail on how the model responds, to ensure that its answers are not harmful. The fine-tuned version behind ChatGPT was also optimized for speed, since it is expected to converse with people in a timely fashion.

GPT-4 uses a similar architecture to GPT-3.5 but extends it even further to allow multimodal inputs such as images. GPT-4 can understand and describe almost any image, from a screenshot of a Discord server to a hand-drawn mock-up of a website. It can even provide working code for a website that matches the image. This is a significant upgrade from GPT-3.5, which can only accept text prompts. The new architecture and model size also allow for a larger input context, meaning users can send more tokens (roughly equivalent to words) in a single shot. Previously, GPT-3.5’s context was 4,000 tokens, whereas GPT-4’s context ranges from 8,000 to 32,000 tokens depending on the model variation.
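Because these limits are counted in tokens rather than words, it can be worth measuring a prompt before sending it. The sketch below uses OpenAI's tiktoken tokenizer with the cl100k_base encoding used by the GPT-3.5 and GPT-4 chat models; the context figures are the approximate limits quoted above.

import tiktoken

# Approximate context windows discussed above, in tokens (not words).
CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4_000,
    "gpt-4": 8_000,
    "gpt-4-32k": 32_000,
}

# Both chat model families use the cl100k_base encoding.
encoding = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, model: str, reply_budget: int = 1_000) -> bool:
    """Check whether a prompt plus a reply budget fits the model's context window."""
    prompt_tokens = len(encoding.encode(prompt))
    return prompt_tokens + reply_budget <= CONTEXT_LIMITS[model]

long_report = " ".join(["word"] * 10_000)  # stand-in for a very long document
print(fits_in_context(long_report, "gpt-3.5-turbo"))  # False: far more than 4,000 tokens
print(fits_in_context(long_report, "gpt-4-32k"))      # True: fits comfortably in 32,000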

User Applications

For users who wish to integrate language models into their products or workflows, the choice between GPT-3.5 and GPT-4 will depend on their specific use cases.

GPT-3.5 is a good choice for applications that require accurate and nuanced language generation with low latency and low cost, such as chatbots or virtual assistants. The increase in the number of parameters allows for more accurate predictions, leading to better user experiences. Additionally, because GPT-3.5 is already available through OpenAI’s API, developers can start integrating it into their products without waiting for GPT-4 to emerge from beta.
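As a sketch of the chatbot case, the loop below keeps a running message history so each GPT-3.5 reply can take earlier turns into account; the system prompt and exit keywords are just placeholders.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Conversation state: the system message sets the assistant's behavior, and
# every user/assistant turn is appended so the model keeps the context.
history = [{"role": "system", "content": "You are a concise customer-support assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # low latency and cost suit interactive chat
        messages=history,
        temperature=0.5,
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)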

GPT-4 is likely the better choice for applications that require longer, more complex language generation, such as writing essays or generating code. The higher number of parameters allows for more complex calculations, leading to better performance on these types of tasks. Any task that involves reading or writing extremely large blocks of text will also benefit from using GPT-4 over GPT-3.5. Additionally, use cases with a visual component where the input might be either text or an image will need to leverage GPT-4. This model is more expensive than GPT-3.5, so users should understand whether their use cases really need the power of GPT-4.
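One way to act on that trade-off is a simple routing rule that only escalates to GPT-4 when a request actually needs its strengths; the thresholds and model names below are illustrative assumptions rather than recommendations.

# Illustrative routing sketch: prefer the cheaper GPT-3.5 model unless the
# request involves an image, a very long prompt, or explicitly complex work.
LONG_PROMPT_TOKENS = 3_000  # rough threshold, assumed for illustration

def pick_model(prompt_tokens: int, has_image: bool, complex_task: bool) -> str:
    if has_image:
        return "gpt-4"        # only GPT-4 accepts image input
    if prompt_tokens > LONG_PROMPT_TOKENS:
        return "gpt-4-32k"    # larger-context variant for big documents
    if complex_task:
        return "gpt-4"        # essays, multi-step code generation, and so on
    return "gpt-3.5-turbo"    # cheaper and faster for routine requests

print(pick_model(prompt_tokens=500, has_image=False, complex_task=False))   # gpt-3.5-turbo
print(pick_model(prompt_tokens=6_000, has_image=False, complex_task=True))  # gpt-4-32k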

Conclusion

Clearly, OpenAI is pushing the boundaries of what is possible with language models, especially with the advent of GPT-3.5 and GPT-4. The differences in architecture and capability between the two give users a choice depending on their specific use cases and product requirements in terms of accuracy, speed, and cost. Whether users desire accurate and nuanced language generation for chatbots or more complex language generation for writing essays and generating code, a GPT model is available to suit their needs. These models will continue to evolve and shape an exciting future of language processing.





Becks Simpson is a Machine Learning Lead at AlleyCorp Nord, where developers, product designers, and ML specialists work alongside clients to bring their AI product dreams to life. In her spare time, she also works with Whale Seeker, another startup using AI to detect whales so that industry and these gentle giants can coexist profitably. She has worked across the spectrum of deep learning and machine learning, from investigating novel deep learning methods and applying research directly to real-world problems, to architecting pipelines and platforms for training and deploying AI models in the wild, to advising startups on their AI and data strategies.

