Google Gemini 1.5 brings improved performance and more

Google has announced the latest version of its large language model, Gemini 1.5, and it brings improved performance along with a range of other features. Gemini 1.5 Pro comes with a 128,000-token context window as standard, but Google is also letting a limited group of developers and enterprise customers test a context window of up to 1 million tokens.

Gemini 1.5 is built on advanced Transformer and MoE (Mixture of Experts) architectures, improving both efficiency and learning capability. Unlike a conventional Transformer, which runs as one large neural network, an MoE model is divided into smaller “expert” networks and dynamically activates only the ones most relevant to a given input. This selective routing significantly boosts efficiency.
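To make the routing idea concrete, here is a minimal Python sketch of top-k MoE gating in the spirit of Sparsely-Gated MoE. The layer sizes, gating scheme, and expert design are all illustrative assumptions, not details of Gemini’s actual internals.

```python
# Minimal sketch of Mixture-of-Experts routing with top-k gating.
# All sizes and names here are illustrative, not Gemini internals.
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, N_EXPERTS, TOP_K = 16, 4, 2

# Each "expert" is a small feed-forward block (here just one weight matrix).
experts = [rng.normal(size=(D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
# The gate scores every expert for a given token.
gate_w = rng.normal(size=(D_MODEL, N_EXPERTS)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through only its top-k experts."""
    logits = x @ gate_w                       # one relevance score per expert
    top = np.argsort(logits)[-TOP_K:]         # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen experts only
    # Only the selected experts run; the rest stay idle, which is the
    # source of the efficiency gain described above.
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

print(moe_layer(rng.normal(size=D_MODEL)).shape)   # -> (16,)
```

Because only TOP_K of the N_EXPERTS networks execute per token, the compute cost per token stays roughly constant even as the total parameter count grows, which is the trade-off MoE architectures are designed to exploit.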

Google’s long history of pioneering work on MoE, including Sparsely-Gated MoE and various Transformer iterations, underpins this approach. Gemini 1.5’s architectural improvements let it learn complex tasks more quickly while maintaining quality, and make it more efficient to train and serve, speeding up the development and optimization of more advanced versions.

Gemini 1.0’s standard 32,000-token context window has been expanded to up to 1 million tokens in production for 1.5 Pro. This enables 1.5 Pro to handle large amounts of data in a single pass, such as 1 hour of video, 11 hours of audio, codebases exceeding 30,000 lines, or over 700,000 words. In its research, Google has even tested capacities of up to 10 million tokens.
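As a rough sanity check on those figures, here is a back-of-the-envelope estimate in Python. The tokens-per-word and tokens-per-line ratios are common rules of thumb for English text and source code, not published numbers for Gemini’s tokenizer.

```python
# Rough estimate of what fits in a 1,000,000-token context window.
WINDOW = 1_000_000

words = 700_000
tokens_for_words = int(words * 1.3)       # ~1.3 tokens/word is a rule of thumb
print(f"700k words ≈ {tokens_for_words:,} tokens "
      f"({tokens_for_words / WINDOW:.0%} of the window)")

code_lines = 30_000
tokens_for_code = code_lines * 10         # ~10 tokens/line, also approximate
print(f"30k lines  ≈ {tokens_for_code:,} tokens "
      f"({tokens_for_code / WINDOW:.0%} of the window)")
```

Under these assumptions, 700,000 words come to roughly 910,000 tokens, which is consistent with Google’s claim that such a document fits inside the 1 million token window with room to spare.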

Google has said it is offering Gemini 1.5 Pro as a limited preview to developers and enterprise customers, and that when the model is ready for wider release it will launch with the 128,000-token window as standard.

Source: Google
