OpenAI just launched GPT-4, a multimodal model: it accepts both image and text inputs and emits text outputs.
Improved capabilities
- Greater creativity and advanced reasoning abilities.
- Accepts images as inputs, enabling tasks such as caption generation and classification.
- Longer context of up to 25,000 words, enabling long-form content creation use cases (see the token-counting sketch after this list).
- Safer and more aligned. It is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on OpenAI’s internal evaluations.
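The 25,000-word figure is a rough conversion from the 32K-token window (a token is about three-quarters of an English word on average). To check whether your own text fits, you can count tokens with OpenAI's tiktoken library. A minimal sketch, assuming the cl100k_base encoding that tiktoken provides for GPT-4-era models; the 8192/32768 limits are our reading of the 8K/32K windows:

```python
# Count tokens with tiktoken to see whether a prompt fits a given context
# window. Assumes the cl100k_base encoding; install with `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(text: str, window: int = 8192) -> bool:
    """True if `text` tokenizes to at most `window` tokens (8K by default)."""
    return len(enc.encode(text)) <= window

print(fits_in_context("A short prompt easily fits."))   # True
print(fits_in_context("word " * 30000, window=32768))   # checking against the 32K window
```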
Pricing
(seems a little steep to us)
- GPT-4 with an 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens.
- GPT-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens (a quick cost estimate in code follows this list).
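Since billing is per-token on both sides of the exchange, a back-of-the-envelope estimator is easy to write. A minimal sketch that just transcribes the list prices above; the token counts are inputs you would measure yourself (e.g. with tiktoken):

```python
# Back-of-the-envelope cost estimator using the list prices quoted above.
PRICES = {  # model -> (prompt $/1K tokens, completion $/1K tokens)
    "gpt-4":     (0.03, 0.06),
    "gpt-4-32k": (0.06, 0.12),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request, given measured token counts."""
    prompt_rate, completion_rate = PRICES[model]
    return prompt_tokens / 1000 * prompt_rate + completion_tokens / 1000 * completion_rate

# e.g. an 8,000-token prompt that yields a 1,000-token answer:
print(f"${estimate_cost('gpt-4', 8000, 1000):.2f}")  # $0.30
```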
Availability
- API - Join the waitlist; developers can get prioritized API access by contributing model evaluations to OpenAI Evals (a minimal call sketch follows this list).
- ChatGPT Plus - Subscribers get GPT-4 access on chat.openai.com, with a dynamically adjusted usage cap.
- Duolingo, Khan Academy, Stripe, Be My Eyes, and Mem, among others, are already using it.
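Once you're off the waitlist, calling the model looks like any other chat completion. A minimal sketch using the openai Python package as it shipped at launch (the v0.27-style ChatCompletion interface); the prompt text is ours, and OPENAI_API_KEY must be set in your environment:

```python
# Basic GPT-4 chat completion via the openai Python package (v0.27-era API).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4's new capabilities in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
```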
Limitations
- GPT-4 has limitations similar to earlier GPT models. Most importantly, it is still not fully reliable: it "hallucinates" facts and makes reasoning errors. Language model outputs should be used with great care, particularly in high-stakes contexts, with the exact protocol (human review, grounding with additional context, or avoiding high-stakes uses altogether) matched to the needs of the specific application.
- GPT-4 generally lacks knowledge of events after September 2021, when the vast majority of its pre-training data cuts off, and it does not learn from experience.
- It can make simple reasoning errors that seem at odds with its competence across so many domains, and it can be overly gullible, accepting obviously false statements from a user.
- It can fail at hard problems the same way humans do, such as introducing security vulnerabilities into the code it produces.