
Meta sheds more light on how it is evolving Llama 3 training — for now it relies on almost 50,000 Nvidia H100 GPUs, but how long before Meta switches to its own AI chip?


Meta has unveiled details about its AI training infrastructure, revealing that it currently relies on almost 50,000 Nvidia H100 GPUs to train its open-source Llama 3 LLM.

The company says it will have over 350,000 Nvidia H100 GPUs in service by the end of 2024, and the computing power equivalent to nearly 600,000 H100s when combined with hardware from other sources.



By Lisa Nichols

Passionate about the power of words and their ability to inform, inspire, and ignite change, Lisa Nichols is an accomplished article writer with a flair for crafting engaging and informative content. With a deep curiosity for various subjects and a dedication to thorough research, Lisa Nichols brings a unique blend of creativity and accuracy to every piece.
