OpenAI Unveils GPT-4o: A Faster and Smarter ChatGPT Model



OpenAI has officially announced GPT-4o, a faster iteration of ChatGPT that boasts enhanced capabilities in text, image, and voice processing. During a live broadcast announcement on Monday, OpenAI's CTO, Mira Murati, highlighted the upgraded model's significantly improved speed and its advancements in handling various types of data. Discover more about GPT-4o's features and enhancements below.


ChatGPT, currently the most widely used artificial intelligence assistant, has recently made a name for itself worldwide.

OpenAI has now announced GPT-4o, a faster model of ChatGPT.

OpenAI's CTO, Mira Murati, stated in a live broadcast announcement on Monday that the updated model is "much faster" and has improved "skills in text, image, and sound."

Murati also added that the model will be free for all users, with paid users continuing to have "up to five times the usage limits" of free users.

In a blog post, OpenAI mentioned that GPT-4o's capabilities will be "gradually rolled out."

However, text and image capabilities will start being available in ChatGPT today.

OpenAI CEO Sam Altman mentioned that the model is "naturally multimodal," meaning it can generate content or understand commands in audio, text, or images.

Altman also added on X that developers interested in working with GPT-4o will have access to it through the API, at half the price and twice the speed of GPT-4 Turbo.

New features are coming to ChatGPT's voice mode as part of the new model.

The application will function as a real-time, observation-based voice assistant, similar to the one in the film Her. Currently, the voice mode is limited: it responds to one command at a time and works only with what it can hear.


Altman evaluated OpenAI's direction in a blog post following the live broadcast event.

While stating that the company's original vision was to "create all possibilities for the world," he acknowledged that this vision has since changed. Conflicting reports ahead of the GPT-4o launch had speculated that OpenAI would announce an artificial intelligence search engine to rival Google and Perplexity, a voice assistant added to GPT-4, or an entirely new and improved model, GPT-5.
