
Intel quietly adds Jaguar Shores to its Gaudi AI accelerator roadmap as it looks to compete more aggressively with AMD and Nvidia.

  • Intel is betting big on Jaguar Shores to compete in the AI space
  • The Gaudi chips give Intel a shot at the AI inference market
  • Intel expects the 18A node to give it a manufacturing edge

Intel has been quietly building out its AI chip portfolio, recently adding the Jaguar Shores AI accelerator to its roadmap in a significant move as it attempts to compete with the likes of Nvidia and AMD.

The Jaguar Shores AI accelerator, revealed at the recent SC2024 supercomputing conference, is a key part of Intel's strategy to stay competitive. Although details are scarce, Jaguar Shores is likely the successor to Falcon Shores, which is slated to launch in 2025.



Samsung is going after Nvidia’s billions with new AI chip — Mach-1 accelerator will combine CPU, GPU and memory to tackle inference tasks but not training


Samsung is reportedly planning to launch its own AI accelerator chip, the ‘Mach-1’, in a bid to challenge Nvidia‘s dominance in the AI semiconductor market. 

The new chip, which will likely target edge applications with low power consumption requirements, will go into production by the end of this year and make its debut in early 2025, according to the Seoul Economic Daily.



SuperNIC network accelerator for AI cloud data introduced by NVIDIA

The advent of artificial intelligence (AI) and its subsequent growth has brought about a significant shift in the technology landscape. One of the areas experiencing this transformation is cloud computing, where the traditional Ethernet-based cloud networks are being challenged to handle the computational requirements of modern AI workloads. This has led to the emergence of SuperNICs, a new class of network accelerators specifically designed to enhance AI workloads in Ethernet-based clouds.

SuperNICs, or Super Network Interface Cards, have unique features that set them apart from traditional network interface cards (NICs): high-speed packet reordering, advanced congestion control, programmable compute on the I/O path, power-efficient design, and full-stack AI optimization. Together these features provide high-speed network connectivity for GPU-to-GPU communication, with speeds of up to 400 Gb/s using RDMA over Converged Ethernet (RoCE).
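To get a feel for what 400 Gb/s of GPU-to-GPU bandwidth means in practice, the back-of-the-envelope sketch below estimates the transfer time for a gradient exchange. The payload size and link efficiency are illustrative assumptions, not figures from NVIDIA:

```python
# Rough transfer-time estimate over a 400 Gb/s RoCE link.
# Payload size and link efficiency are illustrative assumptions.

LINK_GBPS = 400    # SuperNIC line rate, gigabits per second
PAYLOAD_GB = 10    # hypothetical gradient payload, gigabytes
EFFICIENCY = 0.9   # assumed achievable fraction of line rate

payload_gbits = PAYLOAD_GB * 8
seconds = payload_gbits / (LINK_GBPS * EFFICIENCY)
print(f"~{seconds:.3f} s to move {PAYLOAD_GB} GB at {LINK_GBPS} Gb/s")
```

At these assumed numbers the exchange takes roughly a quarter of a second, which is why deterministic, congestion-controlled links matter when such transfers happen thousands of times per training run.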

The capabilities of SuperNICs are particularly crucial in the current AI landscape, where the advent of generative AI and large language models has imposed unprecedented computational demands. Traditional Ethernet and foundational NICs, which were not designed with these needs in mind, struggle to keep up. SuperNICs, on the other hand, are purpose-built for these modern AI workloads, offering efficient data transfer, low latency, and deterministic performance.

What is a SuperNIC and why does it matter?

The comparison between SuperNICs and Data Processing Units (DPUs) is an interesting one. While DPUs offer high throughput and low-latency network connectivity, SuperNICs take it a step further by being specifically optimized for accelerating networks for AI. This optimization is evident in the 1:1 ratio between GPUs and SuperNICs within a system, a design choice that significantly enhances AI workload efficiency.

A prime example of this new technology is NVIDIA’s BlueField-3 SuperNIC, the world’s first SuperNIC for AI computing. Based on the BlueField-3 networking platform and integrated with the Spectrum-4 Ethernet switch system, this SuperNIC forms part of an accelerated computing fabric designed to optimize AI workloads.

The NVIDIA BlueField-3 SuperNIC offers several benefits that make it a valuable asset in AI computing environments. It provides peak AI workload efficiency, consistent and predictable performance, and secure multi-tenant cloud infrastructure. Additionally, it offers an extensible network infrastructure and broad server manufacturer support, making it a versatile solution for various AI needs.

The emergence of SuperNICs marks a significant step forward in the evolution of AI cloud computing. By offering high-speed, efficient, and optimized network acceleration, SuperNICs like NVIDIA’s BlueField-3 SuperNIC are poised to revolutionize the way AI workloads are handled in Ethernet-based clouds. As the AI field continues to grow and evolve, the role of SuperNICs in facilitating this growth will undoubtedly become more prominent.

Image Credit: NVIDIA

Filed Under: Hardware, Top News





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


AMD Instinct MI300X generative AI accelerator

A slide showing the specifications of the new AMD Instinct MI300X

AMD has outlined its AI strategy this week, which includes delivering a broad portfolio of high-performance, energy-efficient GPUs, CPUs, and adaptive computing solutions for AI training and inference. To meet the demand driven by the explosion of AI applications, AMD has launched the Instinct MI300X, which it bills as the world's highest-performance accelerator for generative AI, built on the new CDNA 3 data center architecture optimized for performance and power efficiency.

The MI300X features significant improvements over previous generations, including a new compute engine, support for sparsity and the latest data formats, industry-leading memory capacity and bandwidth, and advanced process technologies and 3D packaging.

AMD Instinct MI300X designed for generative AI

The AMD Instinct MI300X is built on the CDNA 3 data center architecture, which is tailored to the demands of generative AI. The new accelerator pairs a redesigned compute engine with substantial memory capacity and top-tier bandwidth, tuned for the latest data formats and for sparsity support.

The AI industry is on an upward trajectory, with the data center AI accelerator market expected to balloon from $30 billion in 2023 to a staggering $400 billion by 2027. AMD is strategically positioned to capitalize on this growth. The company’s focus on delivering peak performance, maximizing energy efficiency, and providing a varied computing portfolio is exemplified by the Mi 300X, which excels in generative AI tasks.
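The growth figures quoted above imply a remarkably steep curve. As a quick sanity check, the sketch below computes the compound annual growth rate (CAGR) implied by going from $30 billion in 2023 to $400 billion in 2027:

```python
# Implied compound annual growth rate (CAGR) for the market figures
# quoted above: $30B in 2023 growing to $400B by 2027 (4 years).

start_billions = 30
end_billions = 400
years = 2027 - 2023

cagr = (end_billions / start_billions) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 91% per year
```

A market nearly doubling every year for four years running is an aggressive forecast, which helps explain why so many vendors are racing into the accelerator space.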

World’s most advanced accelerator for generative AI

AMD also recognizes the importance of networking in AI system performance. The company is a proponent of open Ethernet, which facilitates efficient system communication and helps ensure that AMD's solutions can scale quickly while maintaining high performance.

The MI300A, AMD's innovative data center APU for AI and high-performance computing (HPC), has begun volume production. This APU marks a significant advancement in performance and efficiency, furthering AMD's AI strategy and reinforcing its dedication to cutting-edge innovation.

Software plays a critical role in the AI ecosystem, and AMD's commitment to an open software environment is clear with the introduction of ROCm 6. The software stack is tuned for generative AI and large language models, providing developers with the tools they need to push the boundaries of AI technology.

Picture of the AMD Instinct MI300X AI accelerator

AI’s influence is not limited to data centers; it’s also making its way into personal computing. AMD’s Ryzen 8040 series mobile processors, which feature integrated neural processing units (NPUs), are bringing AI capabilities to laptops and other personal devices. This move is democratizing AI technology, making it more accessible to a wider audience.

Collaboration is a key aspect of AMD’s strategy for AI innovation. The company works closely with industry leaders to ensure that its AI solutions are well-integrated and widely available. These partnerships are crucial for the development of next-generation AI applications.

AMD is not merely keeping up with the rapid advancements in AI; it is actively leading the charge. The introduction of pioneering products like the Instinct MI300X and the MI300A, combined with a strong commitment to open software and collaborative efforts, places AMD in a commanding position in the AI revolution, driving the future of computing forward.

Filed Under: Technology News, Top News







Pocket AI RTX A500 palm-sized GPU accelerator

ADLINK Pocket AI RTX A500 GPU accelerator 2023

The ADLINK Pocket AI, a portable GPU accelerator, is a unique device that is set to transform the way we work with artificial intelligence (AI). Powered by an NVIDIA RTX A500, this compact device is about the size of a pack of playing cards, making it a highly portable solution for AI developers, professional graphics users, and embedded industrial applications.

The Pocket AI is designed to boost productivity by improving work efficiency. It offers the ultimate in flexibility and reliability on the move, delivering a perfect balance between power and performance. This is made possible by the NVIDIA RTX GPU, which is renowned for its superior performance in AI and professional visual computing applications.

Pocket AI RTX A500 small portable GPU accelerator

Portable GPU accelerator

The partnership between ADLINK and NVIDIA, the industry leader in GPU technology, has resulted in this superior, portable accelerator. NVIDIA’s dedication to innovation and excellence in GPU technology aligns perfectly with ADLINK’s commitment to delivering best-in-class solutions. This collaboration has allowed ADLINK to offer customers the most advanced technology and full support.

 

The Pocket AI is equipped with an NVIDIA RTX A500 GPU and 4GB of GDDR6 RAM. It boasts 2048 NVIDIA CUDA cores, 64 NVIDIA Tensor Cores, and 16 NVIDIA RT Cores, a combination that delivers 100 TOPS of dense INT8 inference and 6.54 TFLOPS of peak FP32 performance. The device also supports NVIDIA CUDA-X and RTX software enhancements, further extending its capabilities.

ADLINK Pocket AI RTX A500 specifications


One of the key features of the Pocket AI is its connection via Thunderbolt 3, Thunderbolt 4, or USB4. The Thunderbolt interface, popularized on laptops, thin clients, and compact PCs, has advanced to version 4 (backward compatible with version 3) and adopted the USB Type-C connector, leading to a proliferation of peripherals built on the technology. The Pocket AI takes advantage of Thunderbolt's fast transfer speed (up to 40 Gb/s) and its general availability in modern hosts, creating an intuitive plug-and-play user experience.
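To put the 40 Gb/s figure in context, the sketch below estimates how long it would take to stream a payload to the device over Thunderbolt. The 4 GB payload is a hypothetical example (chosen to match the card's VRAM capacity), not an ADLINK specification:

```python
# Rough transfer-time estimate over Thunderbolt's 40 Gb/s link.
# The 4 GB payload is a hypothetical example, not a Pocket AI spec.

LINK_GBPS = 40   # Thunderbolt 3/4 line rate, gigabits per second
PAYLOAD_GB = 4   # hypothetical payload filling the card's 4 GB VRAM

seconds = PAYLOAD_GB * 8 / LINK_GBPS
print(f"~{seconds:.1f} s to stream {PAYLOAD_GB} GB at {LINK_GBPS} Gb/s")
```

Even at the theoretical line rate, filling the card's memory takes under a second, which is why an external accelerator over Thunderbolt remains practical for inference workloads.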

ADLINK Pocket AI

The Pocket AI is designed for accelerated AI workloads, with a base clock of 435 MHz that can boost up to 1,335 MHz. It packs 2048 CUDA cores, 64 Tensor Cores, 16 RT Cores, and 4 GB of GDDR6, yet consumes only 25 watts of power. It's worth noting, however, that the Pocket AI does not have a video output.
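As a sanity check on these numbers, peak FP32 throughput is conventionally estimated as CUDA cores × 2 FLOPs per clock (one fused multiply-add) × clock speed. A quick sketch using the figures quoted in this article:

```python
# Conventional peak-FP32 estimate: CUDA cores x 2 FLOPs/clock (FMA) x clock.
# Figures are those quoted in the article; treat the result as a sketch.

cuda_cores = 2048
boost_clock_ghz = 1.335

tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(f"Estimated peak FP32: {tflops:.2f} TFLOPS")

# The quoted 6.54 TFLOPS figure implies a higher effective clock:
implied_clock_ghz = 6.54 * 1000 / (cuda_cores * 2)
print(f"Clock implied by 6.54 TFLOPS: {implied_clock_ghz:.2f} GHz")
```

The listed 1,335 MHz boost yields about 5.47 TFLOPS, so the quoted 6.54 TFLOPS peak implies an effective clock closer to 1.6 GHz; the discrepancy is worth keeping in mind when comparing spec sheets.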

 

In terms of performance in gaming and AI tasks, the Pocket AI holds its own against integrated AMD graphics and Intel Iris Xe graphics. There is room for improvement, however: a video output would allow users to connect the device to an external display, further enhancing its versatility and usability.

The ADLINK Pocket AI is a compact, powerful, and highly portable GPU accelerator that is set to change the way we work with AI. Its performance, flexibility, and reliability make it a strong fit for AI developers, professional graphics users, and embedded industrial applications. Despite some room for improvement, the Pocket AI will soon be available to purchase, and more details are available from the official product page.

Image Credit: ETA Prime

Filed Under: Hardware, Top News




