Together AI has announced the release of its new Inference Engine 2.0, which includes the highly anticipated Turbo and Lite endpoints. This new inference stack is designed to provide significantly faster decoding throughput and superior performance compared to existing solutions.
Performance Enhancements
According to Together AI, the Together Inference Engine 2.0 offers decoding throughput four times faster than the open-source vLLM and outperforms commercial solutions such as Amazon Bedrock, Azure AI, Fireworks, and Octo AI by 1.3x to 2.5x. The engine achieves over 400 tokens per second on Meta Llama 3 8B, thanks to advancements in FlashAttention-3, faster GEMM and MHA kernels, quality-preserving quantization, and speculative decoding.
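Speculative decoding, one of the techniques cited above, speeds up generation by letting a small draft model propose several tokens that the large target model then verifies in a single forward pass. The following is a minimal, greedy-verification sketch of that idea, not Together's implementation; `draft_next` and `target_argmax` are hypothetical callables standing in for the draft and target models.

```python
"""Toy greedy speculative decoding loop (illustrative sketch only)."""
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],        # draft model: next token for a sequence
    target_argmax: Callable[[List[int]], List[int]],  # target model: greedy choice at every position
    k: int = 4,
    max_new_tokens: int = 64,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. The small draft model cheaply proposes k tokens, one at a time.
        draft: List[int] = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))

        # 2. The large target model scores prompt + draft in ONE forward pass;
        #    preds[i] is its greedy choice for the token following position i.
        preds = target_argmax(tokens + draft)

        # 3. Accept the longest prefix of the draft that the target agrees with.
        accepted = 0
        for i, tok in enumerate(draft):
            if preds[len(tokens) + i - 1] == tok:
                accepted += 1
            else:
                break
        tokens += draft[:accepted]

        # 4. Always take one token from the target itself, so each round makes
        #    progress even when the draft is rejected outright.
        tokens.append(preds[len(tokens) - 1])
    return tokens
```

Together's engine uses custom-built speculators in this role; the sketch leaves both models abstract so the propose/verify loop stays visible.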
New Turbo and Lite Endpoints
Together AI has introduced the new Turbo and Lite endpoints, starting with Meta Llama 3. These endpoints are designed to balance performance, quality, and cost, so enterprises do not have to trade one for another. Together Turbo closely matches the quality of full-precision FP16 models, while Together Lite offers the most cost-efficient and scalable Llama 3 models available.
Together Turbo endpoints deliver fast FP8 inference while closely matching FP16 reference models in quality and outperforming other FP8 solutions on AlpacaEval 2.0. They are priced at $0.88 per million tokens for Llama 3 70B and $0.18 per million tokens for Llama 3 8B, making them significantly more affordable than GPT-4o.
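As a usage illustration, the snippet below calls a Turbo endpoint through Together's Python SDK. It is a minimal sketch: the model identifier shown is an assumption and should be checked against Together AI's current model list.

```python
# Minimal sketch of calling a Turbo endpoint via Together's Python SDK.
import os
from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct-Turbo",  # assumed identifier; verify before use
    messages=[{"role": "user", "content": "Summarize FP8 inference in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```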
Together Lite endpoints use INT4 quantization to offer high-quality models at lower cost. Llama 3 8B Lite is priced at $0.10 per million tokens, roughly one-sixth the price of GPT-4o mini.
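Taking the quoted prices at face value, a quick back-of-the-envelope calculation shows how token volume translates into monthly spend on each tier (the figures are the ones quoted above; confirm current rates on the Together AI pricing page).

```python
# Back-of-the-envelope serving cost at the prices quoted above (USD per 1M tokens).
PRICE_PER_MILLION = {
    "llama-3-70b-turbo": 0.88,
    "llama-3-8b-turbo": 0.18,
    "llama-3-8b-lite": 0.10,
}

def monthly_cost(tokens_per_month: float, model: str) -> float:
    """Estimated monthly spend for a given token volume and model tier."""
    return tokens_per_month / 1_000_000 * PRICE_PER_MILLION[model]

# Example: 500M tokens/month on the 8B Lite tier -> $50.00
print(f"${monthly_cost(500_000_000, 'llama-3-8b-lite'):.2f}")
```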
Adoption and Endorsements
Over 100,000 developers and companies, including Zomato, DuckDuckGo, and the Washington Post, are already utilizing the Together Inference Engine for their Generative AI applications. Rinshul Chandra, COO of Food Delivery at Zomato, praised the engine for its high quality, speed, and accuracy.
Technical Innovations
The Together Inference Engine 2.0 incorporates several technical advancements, including FlashAttention-3, custom-built speculators, and quality-preserving quantization techniques. These innovations contribute to the engine’s superior performance and cost-efficiency.
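To make the quantization idea concrete, the snippet below round-trips a weight matrix through a generic symmetric INT4 scheme and measures the reconstruction error. This is an illustrative sketch only, not Together's quality-preserving method, which pairs lower-precision formats with techniques that keep outputs close to the FP16 reference.

```python
# Generic symmetric INT4 weight quantization round-trip (illustrative only).
import numpy as np

def quantize_int4(w: np.ndarray):
    """Map floating-point weights to integers in [-7, 7] with a per-tensor scale."""
    scale = float(np.max(np.abs(w))) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(1024, 1024)).astype(np.float32)  # stand-in weight matrix
q, scale = quantize_int4(w)
err = float(np.mean(np.abs(dequantize(q, scale) - w)))
print(f"mean absolute round-trip error: {err:.6f}")
```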
Future Outlook
Together AI plans to continue pushing the boundaries of AI acceleration. The company aims to extend support for new models, techniques, and kernels, ensuring the Together Inference Engine remains at the forefront of AI technology.
The Turbo and Lite endpoints for Llama 3 models are available starting today, with plans to expand to other models soon. For more information, visit the Together AI pricing page.
Image source: Shutterstock
Source: https://Blockchain.News/news/together-ai-unveils-inference-engine-2-0-turbo-lite-endpoints