Huawei looks set to launch a new HBM server chip to challenge Xeon and Epyc; yes, this is the same memory that powers Nvidia and AMD’s AI GPUs.

Huawei may add HBM support to its Kunpeng SoCs. Clues point to a successor to the Kunpeng 920, launched in 2019, and the new HBM-equipped SoC may target rivals in the HPC server market. Huawei engineers are said to have posted new Linux patches to enable driver support … Read more

After Nvidia, Samsung vows to abandon consumer focus and concentrate on lucrative enterprise market instead — surge in HBM, enterprise SSD, and DDR5 server memory chips expected to drive margins

Nvidia has transformed into an AI superpower, becoming the third most valuable company in the world off the back of it, so it’s perhaps no surprise other tech giants are looking on in envy and shifting their focus to follow suit. During its recent earnings call, Samsung reported a consolidated operating profit of $4.8 billion in … Read more

Samsung’s archrival strikes crucial partnership with Nvidia’s closest ally to deliver key next-gen memory — SK Hynix teams up with TSMC to advance HBM development but could this move encourage TSMC to become like Intel?

South Korean memory giant SK Hynix, which recently announced plans to build the world’s largest chip factory, has now unveiled a major partnership with top Taiwanese semiconductor foundry TSMC. The two firms aim to cement their positions in the fast-growing AI market by developing and producing the next generation of High Bandwidth Memory, known … Read more

Lenovo unveils first all-AMD AI ‘supercomputer’ flanked with up to 1.5TB of HBM memory and promises drop-in support for the future AMD EPYC CPU — new ThinkSystem has dual EPYC CPUs and 8 Instinct MI300X GPUs

Lenovo has taken the wraps off its ThinkSystem SR685a V3 server, which it says is an optimal solution both for enterprise private on-prem AI and for public AI cloud service providers. Crafted in tandem with AMD, the server has been specifically engineered to handle the demanding compute needs associated with GenAI and Large … Read more

Another startup is taking on Nvidia using a clever trick — Celestial AI brings DDR5 and HBM together to slash power consumption by 90%, may already be partnering with AMD

There’s no shortage of startups pushing technology that could one day prove pivotal in AI computing and memory infrastructure. Celestial AI, which recently secured $175 million in Series C funding, is looking to commercialize its Photonic Fabric technology, which aims to redefine optical interconnects. Celestial AI’s foundational technology is designed to disaggregate AI compute from … Read more

Startup claims to boost LLM performance using standard memory instead of GPU HBM — but experts remain unconvinced by the numbers despite promising CXL technology

MemVerge, a provider of software designed to accelerate and optimize data-intensive applications, has partnered with Micron to boost the performance of LLMs using Compute Express Link (CXL) technology. The company’s Memory Machine software uses CXL to reduce idle time in GPUs caused by memory loading. The technology was demonstrated at Micron’s booth at Nvidia GTC … Read more

Samsung’s biggest memory rivals are plotting a tie-up with Kafkaesque implications — SK Hynix and Kioxia could build lucrative HBM chips for Nvidia, AMD and others

South Korean chipmaker SK Hynix, a key Nvidia supplier, says it has already sold out of its entire 2024 production of stacked high-bandwidth memory DRAMs, crucial for AI processors in data centers. That’s a problem, given just how in demand HBM chips are right now. However, a solution might have presented itself, as reports say … Read more