At MemCon 2024, Samsung showcased its latest HBM3E technology, talked about its future HBM4 plans, and unveiled the CXL Memory Module Box, also known as CMM-B, the latest addition to its Compute Express Link (CXL) memory module portfolio.
CMM-B is essentially a memory pooling appliance for rack computing leveraging CXL. It supports disaggregated memory allocation, allowing memory capacity available in remote locations to be shared across multiple servers. Through this, CMM-B enables independent resource allocation in the rack cluster and allows for larger pools of memory to be assigned as needed. With up to 60GB/s bandwidth, Samsung says CMM-B is ideal for applications like AI, in-memory databases, and data analytics.
CMM-B can accommodate eight E3.S form factor CMM-D (PCIe Gen5) memory modules for a total of 2TB. CMM-D memory integrates Samsung’s DRAM technology with the CXL open standard interface to deliver efficient, low-latency connectivity between the CPU and memory expansion devices.
Easy setup
Samsung says the CMM-B integrates seamlessly into Supermicro Plug and Play Rack Scale Solutions, ensuring not only faster productivity but also reduced total cost of ownership (TCO).
The CMM-B module comes pre-installed with Samsung’s Cognos Management Console (SCMC) software, which provides an intuitive interface for quick setup of the Rack-Scale server appliance. This software facilitates dynamic memory allocation, enabling memory to be allocated independently of the server to which it is attached.
During his keynote at MemCon, Jin-Hyeok Choi, Corporate Executive Vice President, Device Solutions Research America – Memory at Samsung Electronics said, “AI innovation cannot continue without memory technology innovation. As the market leader in memory, Samsung is proud to continue advancing innovation – from the industry’s most advanced CMM-B technology, to powerful memory solutions like HBM3E for high-performance computing and demanding AI applications. We are committed to collaborating with our partners and serving our customers to unlock the full potential of the AI era together.”
(Image credit: Samsung)
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
At its simplest, RAM (Random Access Memory) is a type of computer memory, often referred to as short-term memory because it is volatile, meaning that the data is not saved when the power is turned off.
When business users switch on the computer, the operating system and applications are loaded into RAM, which is directly connected to the CPU, making the data quickly accessible for processing.
In corporate settings, RAM (memory modules) comes in different shapes and sizes. DIMMs (Dual In-Line Memory Modules) can be found in desktops, workstations and servers, while laptops require the physically smaller SODIMM (Small Outline DIMM).
A memory module contains several DRAM (Dynamic RAM) chips, a type of semiconductor memory. 'Dynamic' simply means that the data held by transistors in the chips is constantly refreshed. The number of DRAM chips found on a memory module varies depending on its capacity (8GB, 16GB, 32GB).
The lithography of DRAM chips has been revised and improved many times over recent decades, leading not only to reductions in cost-per-bit but also to smaller components and higher clock rates. Overall, DRAM now delivers faster performance and higher capacities while using less power, which cuts energy costs, controls heat and extends battery life.
DRAM operates in one of two modes, synchronous or asynchronous. Asynchronous DRAM was the common technology used up until the end of the 1990s. In synchronous mode, read, write and refresh operations are controlled by a system clock, synchronized with the clock speed of the computer's CPU. Today's computers use synchronous mode, or Synchronous Dynamic Random Access Memory (SDRAM), which connects to the system board via a memory module.
Iwona Zalewska
DRAM business manager, Kingston EMEA.
New generations of DRAM
The latest version of SDRAM is DDR5 (Double Data Rate 5th generation), which debuted in 2021 and comes in a range of standard speeds starting at 4800MT/s (megatransfers per second), a measure of how fast data is transferred on and off the memory module. Approximately every seven years, a new memory generation is introduced to accommodate the ever-increasing demand for speed, density and configurations in business computing environments. DDR5, for example, is designed with new features that provide higher performance, lower power consumption and more robust data integrity for the next decade of computing.
IT decision makers who are considering purchasing memory must be aware that memory modules are not backwards compatible across generations: DDR5 memory will not physically slot into a DDR4 or DDR3 memory socket. Within a memory generation, however, faster speeds are backwards compatible. For example, if a user pairs a standard DDR5-5600MT/s module with a 12th Generation Intel processor, the memory will automatically 'clock down' to operate at 4800MT/s, the speed supported by the host system, or lower. This will vary depending on the model of the CPU and the number of memory modules installed in the system.
It’s essential to know the processor and motherboard already installed in the computer when planning on upgrading memory, but there are some other considerations too. Most PCs have four RAM sockets, some, such as workstations, have as many as eight, but laptops are likely to have only two accessible memory sockets, and in thin models, there may only be one.
Different types of RAM
Even though they may look similar and perform the same function, the memory modules found in HEDT (High-End Desktop) systems and servers are different from those found in PCs. Intel Xeon and AMD Epyc server CPUs come with more CPU cores and more memory channels than Intel Core and AMD Ryzen desktop CPUs, so the specifications and features of server RAM differ from those of PC RAM.
Server CPUs require Registered DIMMs (RDIMMs), which support ECC (Error Correcting Code), a feature that corrects bit errors occurring on the memory bus (between the memory controller and the DRAM chip), ensuring the integrity of the data. The RCD (Registering Clock Driver) is an additional component found on RDIMMs but not on Unbuffered DIMMs (UDIMMs); it ensures that all components on the memory module operate on the same clock cycle, allowing the system to remain stable when a high number of modules is installed.
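The error-correcting principle is easy to see in miniature. The sketch below (Python, illustrative only; real ECC memory implements much wider codewords in hardware) encodes four data bits with a Hamming(7,4) code, flips one bit to simulate an error on the bus, and recovers the original data:

```python
# Toy Hamming(7,4) code: the same single-bit-error-correcting principle
# that ECC memory applies, at much wider word sizes, to data on the bus.

def encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]    # data bits d1..d4
    p1 = d[0] ^ d[1] ^ d[3]                      # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                      # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                      # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def decode(code):
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3              # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1                     # correct the flipped bit
    d = [c[2], c[4], c[5], c[6]]
    return sum(bit << i for i, bit in enumerate(d))

word = 0b1011
code = encode(word)
code[4] ^= 1                 # simulate a single-bit error on the memory bus
assert decode(code) == word  # data recovered transparently
```

The syndrome computed from the three parity checks points directly at the flipped bit, which is why a single-bit error can be corrected without the host ever noticing.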
The type of memory module made for desktops and laptops is generally Non-ECC Unbuffered DIMM. The data processed by users on these types of systems is considered less critical than the data being processed by servers, which may be hosting websites or handling online transactional processing, for example, and need to meet specific SLAs (Service-Level Agreements) and uptimes of 99.9999%, 24/7. Non-ECC UDIMMs contain fewer components and features than RDIMMs and are therefore more affordable, while remaining a reliable memory solution. Unbuffered RAM exists in both DIMM and SODIMM form factors.
Boosting performance
RAM is primarily sold in single modules, but it is also available in kits of two, four or eight, ranging in capacity from 4GB for DDR3 to 96GB for DDR5 (in single modules) and up to 256GB in kits (256GB is offered only as a kit of eight in DDR4 and DDR5). The configurations match the memory channel architecture and, when installed correctly, can deliver a major boost in performance. To illustrate the potential: upgrading from a single DDR5-4800MT/s module with a peak bandwidth of 38.4GB/s to a dual-channel setup instantly doubles the bandwidth to 76.8GB/s.
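As a sketch of where those figures come from: each DDR5 memory channel has a 64-bit (8-byte) data bus, so peak bandwidth is simply the transfer rate multiplied by eight bytes, doubled again for dual channel.

```python
# Theoretical peak bandwidth of DDR5 memory:
# transfers per second x bus width in bytes x number of channels.

def peak_bandwidth_gbs(mt_per_s: int, channels: int = 1, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s (decimal), assuming a 64-bit bus per channel."""
    return mt_per_s * bus_bytes * channels / 1000

print(peak_bandwidth_gbs(4800))              # 38.4 GB/s, single channel
print(peak_bandwidth_gbs(4800, channels=2))  # 76.8 GB/s, dual channel
```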
Accelerating speed
Users running industry-standard speeds are limited to what their computer's processor and motherboard will support, particularly if the platform won't allow modules to be installed into a second memory bank. On a dual-channel motherboard with four sockets, the sockets are arranged in two memory banks, with two sockets per memory channel. If a DDR5 user can install modules into a second bank, in most cases the memory may be forced to clock down to a slower speed to allow for limitations inside the processor.
Users looking for a considerable boost, such as gamers, can opt for overclockable memory. This can be done safely using Intel XMP and AMD EXPO profiles; however, professional help is advisable. Selecting the right gaming memory for overclocking a system means weighing price versus speed versus capacity, the potential limitations of motherboards and processors, and RGB versus non-RGB (to bring in the benefits of lighting).
Useful glossary of terms
Apart from the acronyms we’ve already explained above, here are some additional terms that it will be useful to know:
CPU – Central Processing Units are the core of the computer.
PMIC – Power Management Integrated Circuits help to regulate the power required by the components of the memory module. Server-class DDR5 modules take a 12V input; PC-class modules take 5V.
SPD hub – DDR5 uses a new device that integrates the Serial Presence Detect EEPROM with additional features. It manages access by the external controller and decouples the memory load on the internal bus from the external bus.
On-die ECC – Error Correction Code that mitigates the risk of data leakage by correcting errors within the chip, increasing reliability and reducing defect rates.
MHz – short for megahertz: one million cycles per second. This unit of frequency measurement is used to denote the speed at which data moves within and between components.
MT/s – short for megatransfers (or million transfers) per second, a more accurate measurement of the effective data rate (speed) of DDR SDRAM memory in computing.
Non-binary memory – The density of DRAM chips usually doubles with each iteration, but with DDR5, an intermediary density – 24Gbit – was introduced, which provides more flexibility and is called non-binary memory.
GB/s – Gigabytes per second. A Gigabyte is a unit of data storage capacity that is approximately 1 billion bytes. It has been a common unit of capacity measurement for data storage products since the mid-1980s.
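The MHz/MT/s distinction in the glossary above comes down to double data rate: DDR memory transfers data on both the rising and falling edge of the clock, so the MT/s figure is twice the I/O clock frequency. A minimal illustration:

```python
# DDR ("Double Data Rate") memory moves data on both clock edges,
# so transfer rate in MT/s = 2 x I/O clock in MHz.

def mt_per_s(io_clock_mhz: int) -> int:
    return 2 * io_clock_mhz

print(mt_per_s(2400))  # 4800: a DDR5-4800 module runs a 2400MHz I/O clock
```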
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Samsung has revealed it expects to triple its HBM chip production this year.
“Following the third-generation HBM2E and fourth-generation HBM3, which are already in mass production, we plan to produce the 12-layer fifth-generation HBM and 32 gigabit-based 128 GB DDR5 products in large quantities in the first half of the year,” SangJoon Hwang, EVP and Head of DRAM Product and Technology Team at Samsung, said during a speech at MemCon 2024.
“With these products, we expect to enhance our presence in high-performance, high-capacity memory in the AI era.”
Snowbolt
Samsung plans a 2.9-fold increase in HBM chip production volume this year, up from the 2.5-fold projection previously announced at CES 2024. The company also shared a roadmap detailing its future HBM production, projecting a 13.8-fold surge in HBM shipments by 2026 compared to 2023.
Samsung used MemCon 2024 to showcase its HBM3E 12H chip – the industry’s first 12-stack HBM3E DRAM – which is currently being sampled with customers. It will follow Micron’s 24GB 8H HBM3E into mass production in the coming months.
According to The Korea Economic Daily, Samsung also spoke of its plans for HBM4 and its sixth-generation HBM chip, which the company has named “Snowbolt”. Samsung says it intends to apply the buffer die, a control device, to the bottom layer of stacked memory for enhanced efficiency. It didn’t provide any information on when that future generation of HBM will see the light of day, however.
Despite being the world’s largest memory chipmaker, Samsung has lagged behind archrival SK Hynix in the HBM chip segment, forcing it to invest heavily to boost production of what is a crucial component in the escalating AI race due to its superior processing speed.
SK Hynix isn’t going to make things easy for Samsung, however. The world’s second-largest memory chip maker recently announced plans to build the largest chip production facility ever seen at the Yongin Semiconductor Cluster in Gyeonggi Province, South Korea.
Micron has showcased its colossal 256GB DDR5-8800 MCRDIMM memory modules at the recent Nvidia GTC 2024 conference.
The high-capacity, double-height, 20-watt modules are tailored for next-generation AI servers, such as those based on Intel‘s Xeon Scalable ‘Granite Rapids’ processors, which require substantial memory for training.
Tom’s Hardware, which saw the memory module first-hand and took the photo above, says the company displayed a ‘Tall’ version of the module at GTC, but it also intends to offer standard-height MCRDIMMs suitable for 1U servers.
Multiplexer Combined Ranks DIMMs
Both versions of the 256GB MCRDIMMs are constructed using monolithic 32Gb DDR5 ICs. The Tall module houses 80 DRAM chips on each side, while the Standard module employs 2Hi stacked packages and will run slightly hotter as a result.
MCRDIMMs, or Multiplexer Combined Ranks DIMMs, are dual-rank memory modules that employ a specialized buffer to allow both ranks to operate concurrently.
As Tom’s Hardware explains, “The buffer allows the two physical ranks to act as if they were two separate modules working in parallel, thereby doubling performance by enabling the simultaneous retrieval of 128 bytes of data from both ranks per clock, effectively doubling the performance of a single module. Meanwhile, the buffer works with its host memory controller using the DDR5 protocol, albeit at speeds beyond those specified by the standard, at 8800 MT/s in this case.”
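The doubling described in that quote can be sketched numerically. Assuming, for illustration, two ranks each operating at a DDR5-4400 rate on a 64-bit bus (the per-rank rate here is an assumption for the arithmetic, not a Micron specification):

```python
# MCRDIMM arithmetic: the multiplexing buffer retrieves data from both
# ranks per clock, so the host sees twice the per-rank data rate.

RANK_RATE_MTS = 4400  # assumed per-rank rate, for illustration only
BUS_BYTES = 8         # 64-bit DDR5 data bus

per_rank_gbs = RANK_RATE_MTS * BUS_BYTES / 1000
combined_gbs = 2 * per_rank_gbs  # both ranks operating concurrently

print(per_rank_gbs)   # 35.2 GB/s from a single rank
print(combined_gbs)   # 70.4 GB/s presented to the host, i.e. 8800 MT/s
```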
Customers keen to get their hands on the new memory modules won’t have long to wait. In prepared remarks for the company’s earnings call last week, Sanjay Mehrotra, chief executive of Micron, said “We [have] started sampling our 256GB MCRDIMM module, which further enhances performance and increases DRAM content per server.”
Micron hasn’t announced pricing yet, but the cost per module is likely to exceed $10,000.
Panda London Memory Foam Bamboo Pillow: two-minute review
The Panda London Memory Foam Bamboo Pillow is the cheaper of the two pillows Panda makes, but it doesn’t skimp on quality. It’s one of TechRadar’s best pillow selections and offers a luxurious feel with its soft bamboo cover. I’m a huge fan of the Panda London Hybrid Bamboo mattress, which TechRadar rates as one of the best mattresses currently on sale in the UK. But how does the pillow match up?
Like most memory foam pillows, the Panda isn’t going to win any design awards. It’s a lump of memory foam (three layers, to be exact), covered with a mesh polyester internal pillow protector and finished with a bamboo/polyester cover. Pick the pillow up and feel it, and it’s a different story. The bamboo cover feels luxurious to the touch and is really soft – this is a pillow you’ll want to lay your head on. I love the little panda face in the corner as well.
(Image credit: Future)
Not everyone is a fan of memory foam and, as someone who loves a soft pillow, I did initially find the Panda way too solid. But over time, I appreciated the neck support. As a lightweight side and back sleeper, the loft suited me perfectly. However, both my husband and I found it difficult to change positions on the pillow and neither of us particularly enjoyed the moulding sensation of the memory foam. However, if you are a fan of the memory foam ‘hug’, the Panda is a great pillow to go for. At under £50 it’s also very good value for a high-quality memory foam pillow.
Panda Bamboo Pillow review: price & value for money
Mid-range pricing, but a premium feel
30-day trial and 10-year warranty
Rarely discounted
The Panda pillow retails at £44.95 and comes in one size, fitting neatly into the mid-range pricing bracket. With its high-quality bamboo cover, it feels like a premium pillow and looks extremely smart (or, at least, as smart as a lump of foam can look). There’s a 30-day trial period, which is a real godsend if you’re not used to the feel of memory foam and need time to decide if it suits your sleeping style.
The 10-year warranty is particularly generous for a pillow – none of the other pillows in our best pillow round-up have longer than three years. And if you decide during the 30-day trial period that the pillow isn’t for you, you can return it for a full refund. Next day delivery and returns are also both free.
Panda discounts aren’t common. I’ve seen the occasional 10% off, but this is generally only around major sales events. I’d recommend bookmarking our mattress deals page, as it will keep you up to date with when the brand is having a sale.
Panda London Memory Foam Bamboo Pillow review: design & materials
Removable and washable cover
Three layers of visco memory foam
Internal mesh polyester inner cover
Like most memory foam pillows, the Panda is essentially just a lump of memory foam and not particularly enticing shape-wise. At 12cm deep, 40cm wide and 60cm long, it’s slightly shorter than a standard pillow at 70cm, but it still fits neatly into a pillow case. The 40% bamboo cover is the winning touch here – it feels soft and luxurious, and really adds a premium touch to the pillow.
(Image credit: Future)
I also really liked the inclusion of the inner mesh polyester cover, which acts as a natural pillow protector. This is stitched around the pillow to help protect the foam. Speaking of the memory foam, it’s REACH-certified, meaning no harmful chemicals were used during manufacturing. And the bamboo cover is naturally breathable, antibacterial and hypoallergenic, making this a great choice for allergy sufferers.
(Image credit: Future)
The pillow arrived wrapped in a reusable bamboo bag and a 100% biodegradable box, made from recycled paper that can obviously be recycled again. It was ready to sleep on straight out of the box and, somewhat unusually for memory foam, I didn’t notice any off-gassing at all.
Panda London Memory Foam Bamboo Pillow review: care & allergies
The bamboo cover is removable and can be washed on a cool wash of up to 40 degrees before being hung to dry. The memory foam can only be spot cleaned but, with the two covers in place, it should be well protected.
The memory foam in the pillow is REACH-certified and the pillow conforms to the OEKO-TEX standard, meaning that the whole pillow is free from harmful chemicals that could cause skin irritation. As I already mentioned, the bamboo cover is also naturally breathable, antibacterial and hypoallergenic, making this a great choice for allergy sufferers.
If you join Panda’s mailing list, the brand protects five trees. And the pillow is vegan- and bird-friendly, as well as only using bamboo from FSC approved forests where wildlife habitats are protected and monitored.
Panda London Memory Foam Bamboo Pillow review: comfort & support
Medium-firm feel
12cm loft
Best suited to back and side sleepers
As I mentioned at the top of this article, I had some quite differing feedback on this pillow from people I asked to test it. Both my husband and I struggled with the ‘hug’ of the memory foam, although it’s clearly not as pronounced as with some memory foam pillows. Despite this, I could see and feel how supportive it was around the neck and do think it would be an excellent choice for those prone to neck pain.
(Image credit: Future)
I would describe this pillow as medium-firm, with a medium loft of 12cm. It’s pretty lightweight for a memory foam pillow and overall it retained its shape well after getting up in the morning. But my husband, who weighs more than I do, found it quite difficult to change position during the night, feeling that the memory foam had a tendency to settle in one place, which then made his head just want to fall back into that ‘groove’.
Side and back sleepers should enjoy the feel of this pillow, which is nicely supportive but doesn’t throw the spine out of alignment. But a friend who sleeps on her stomach who tried the pillow felt that her head was being raised too high to keep the spine aligned.
(Image credit: Future)
I also gave the pillow to another friend who usually sleeps on a Tempur memory foam pillow. He had quite a different experience. As someone who is used to the heavier Tempur pillow and the more pronounced ‘hug’, he felt that the Panda wasn’t supportive enough for side sleeping. He also felt that the Panda compressed far more than a Tempur pillow and offered less resistance to a sleeper’s head. My feeling is that this makes the Panda pillow an excellent choice for memory foam novices, combining the best of memory foam along with a lighter feel.
Panda London Memory Foam Bamboo Pillow review: temperature regulation
Memory foam is notorious for trapping heat, which can make for an unpleasantly hot sleeping experience. But Panda has been clever here with the inclusion of a bamboo cover.
(Image credit: Future)
Bamboo is naturally breathable and helps to encourage airflow. This, combined with the inner mesh cover, meant that I never felt warm on the pillow. Neither did my husband, which is perhaps of more relevance, as he has a tendency to sleep warm. The pillow doesn’t feel cool to the touch but does feel fairly neutral. For context, we were sleeping in a room that was around 14-15C overnight.
Panda London Memory Foam Bamboo Pillow review: specs
Fill: Visco memory foam
Cover: 40% bamboo / 60% polyester
Dimensions: 60 x 40 x 12cm
Loft: Medium
Firmness: Medium-firm
Care: Removable and washable cover; interior can only be spot cleaned
Trial period: 30 days
Guarantee: 10 years
Price bracket: Mid-range
Should you buy the Panda London Memory Foam Bamboo Pillow?
Buy it if…
✅ You sleep on your side or back: The Panda pillow is nicely supportive and does a good job of keeping the spine aligned in both these sleeping positions. And while there is a distinctive memory foam ‘hug’ to the pillow, you won’t sink in too far.
✅ You sleep hot: Memory foam is notorious for trapping heat, but the Panda’s bamboo cover did an excellent job of keeping my husband (a hot sleeper) cool and comfortable through the night.
✅ You suffer from neck pain: Once you get used to the feel of the Panda pillow, you will start to see how supportive it is around the neck. If you suffer with neck pain, this pillow could help to alleviate it.
Don’t buy it if…
❌ You’re a stomach sleeper: This pillow’s loft is too high for a stomach sleeper and is likely to throw the spine out of alignment if you’re lying on your front. Instead, consider a low loft foam pillow such as the 8cm Levitex Sleep Posture Pillow.
❌ You change positions a lot: The Panda pillow wasn’t always easy to change sleeping positions on, with the foam wanting to push a sleeper’s head back to their original sleeping position. A more traditional pillow, such as the Simba Stratos Pillow that’s filled with down-like fibre clusters, might be a better alternative.
❌ You’re not a fan of the memory foam ‘hug’: Even though the Panda has quite a light memory foam ‘hug’, it still contours and wraps itself around your head to a certain degree. For some people, this will just be a feeling that they can’t get used to; for more bounce and less hug, try the Origin Coolmax Latex Pillow.
How I tested the Panda London Memory Foam Bamboo Pillow
I slept on the Panda pillow for two weeks and also asked my husband and other friends to try the pillow out for differing opinions. There’s only one style and loft of pillow available. I tested the pillow during both a slightly chilly spell and an unseasonably warm patch, which gave a great indication of how the pillow performed in different temperatures. I also tested the pillow in a variety of sleeping positions to see which were most comfortable and supportive.
Samsung is reportedly planning to launch its own AI accelerator chip, the ‘Mach-1’, in a bid to challenge Nvidia‘s dominance in the AI semiconductor market.
The new chip, which will likely target edge applications with low power consumption requirements, will go into production by the end of this year and make its debut in early 2025, according to the Seoul Economic Daily.
The announcement was made during the company’s 55th regular shareholders’ meeting. Kye Hyun Kyung, CEO of Samsung Semiconductor, said the chip design had passed technological validation on FPGAs and that finalization of the SoC design was in progress.
Entirely new type of AGI semiconductor
The Mach-1 accelerator is designed to tackle AI inference tasks and will reportedly overcome the bottleneck issues that arise in existing AI accelerators when transferring data between the GPU and memory. This often results in slower data transmission speeds and reduced power efficiency.
The Mach-1 is reportedly a ‘lightweight’ AI chip, utilizing low-power (LP) memory instead of the costly HBM typically used in AI semiconductors.
The move is widely seen as Samsung’s attempt to regain its position as the world’s largest semiconductor company, fighting back against Nvidia which completely dominates the AI chip market and has seen its stock soar in recent months, making it the third most valuable company in the world behind Microsoft and Apple.
While the South Korean tech behemoth currently has no plans to challenge Nvidia’s H100, B100, and B200 AI powerhouses, Seoul Economic Daily reports that Samsung has established an AGI computing lab in Silicon Valley to expedite the development of AI semiconductors. Kyung stated that the specialized lab is “working to create an entirely new type of semiconductor designed to meet the processing requirements of future AGI systems.”
MemVerge, a provider of software designed to accelerate and optimize data-intensive applications, has partnered with Micron to boost the performance of LLMs using Compute Express Link (CXL) technology.
The company’s Memory Machine software uses CXL to reduce idle time in GPUs caused by memory loading.
The technology was demonstrated at Micron’s booth at Nvidia GTC 2024 and Charles Fan, CEO and Co-founder of MemVerge said, “Scaling LLM performance cost-effectively means keeping the GPUs fed with data. Our demo at GTC demonstrates that pools of tiered memory not only drive performance higher but also maximize the utilization of precious GPU resources.”
Impressive results
The demo utilized a high-throughput FlexGen generation engine and an OPT-66B large language model. This was performed on a Supermicro Petascale Server, equipped with an AMD Genoa CPU, Nvidia A10 GPU, Micron DDR5-4800 DIMMs, CZ120 CXL memory modules, and MemVerge Memory Machine X intelligent tiering software.
The demo contrasted the performance of a job running on an A10 GPU with 24GB of GDDR6 memory, and data fed from 8x 32GB Micron DRAM, against the same job running on the Supermicro server fitted with Micron CZ120 CXL 24GB memory expander and the MemVerge software.
The FlexGen benchmark, using tiered memory, completed tasks in under half the time of traditional NVMe storage methods. Additionally, GPU utilization jumped from 51.8% to 91.8%, reportedly as a result of MemVerge Memory Machine X software’s transparent data tiering across GPU, CPU, and CXL memory.
Raj Narasimhan, senior vice president and general manager of Micron’s Compute and Networking Business Unit, said “Through our collaboration with MemVerge, Micron is able to demonstrate the substantial benefits of CXL memory modules to improve effective GPU throughput for AI applications resulting in faster time to insights for customers. Micron’s innovations across the memory portfolio provide compute with the necessary memory capacity and bandwidth to scale AI use cases from cloud to the edge.”
However, experts remain skeptical about the claims. Blocks and Files pointed out that the Nvidia A10 GPU uses GDDR6 memory, which is not HBM. A MemVerge spokesperson responded to this point, and others that the site raised, stating, “Our solution does have the same effect on the other GPUs with HBM. Between Flexgen’s memory offloading capabilities and Memory Machine X’s memory tiering capabilities, the solution is managing the entire memory hierarchy that includes GPU, CPU and CXL memory modules.”
TikTok’s parent company ByteDance has reportedly quietly invested in Xinyuan Semiconductors, a Chinese memory chip manufacturer.
According to a report from Pandaily, a tech media site based in Beijing, the move positions ByteDance as the third-largest shareholder in the chip maker, with an indirect stake of 9.5%.
A ByteDance spokesperson confirmed this previously undisclosed investment to Pandaily, stating its aim is to hasten the development of VR headsets. This move aligns with ByteDance’s growing interest in the VR sector, as it plans to take on Meta’s Quest and Apple‘s Vision Pro.
Pushing ahead into VR
Based in Shanghai and established in 2019, Xinyuan Semiconductors specializes in Resistive Random Access Memory (ReRAM) technology and related chip products. The company’s portfolio covers three major application areas: high-performance industrial control and automotive SoC and ASIC chips, Computing in Memory (CIM) IP and chips, and System-on-Memory (SoM) chips.
This investment in Xinyuan Semiconductors isn’t ByteDance’s first venture into the semiconductor industry. In 2021, the tech behemoth also invested in Moore Threads, a Chinese GPU manufacturer.
The company’s strategic investments signal a clear intent to compete in the VR space. TikTok is already available as a native app for Vision Pro.
But while this latest investment could potentially be setting the stage for a showdown with Apple and its Vision Pro headset, ByteDance has another far bigger battle on its hands right now.
The US House of Representatives recently passed a significant bill that could lead to a TikTok ban in America if the Chinese parent company fails to sell its controlling stake of the social media app within the next six months.
South Korean chipmaker SK Hynix, a key Nvidia supplier, says it has already sold out of its entire 2024 production of stacked high-bandwidth memory DRAMs, crucial for AI processors in data centers. That’s a problem, given just how in demand HBM chips are right now.
However, a solution might have presented itself, as reports say SK Hynix is in talks with Japanese firm Kioxia Holdings to jointly produce HBM chips.
SK Hynix, the world’s second-largest memory chipmaker, is a major shareholder of Kioxia, the world’s No. 2 NAND flash manufacturer, and if the deal goes ahead, it could see the high-performance chips being produced at facilities co-operated by Kioxia and US-based Western Digital Corp. in Japan.
A merger hangs in the balance
What makes this situation even more interesting is Kioxia and Western Digital have been engaged in merger talks, something that SK Hynix is opposed to over fears this would reduce its opportunities with Kioxia. As SK Hynix is a major shareholder in Kioxia, the Japanese firm and Western Digital’s merger can’t go ahead without its blessing, and this new move could be seen as an important sweetener to help things progress.
After the news of the potential deal broke, SK Hynix issued a statement saying simply, “There is no change in our stance that if there is a collaboration opportunity, we will enter into discussion on that matter.”
If the merger between Kioxia and Western Digital does proceed, it will make the company potentially the biggest manufacturer of NAND memory on the planet, leapfrogging current leader Samsung, so there’s a lot to play for.
The Korea Economic Daily says of the move, “The partnership is expected to cement SK Hynix’s leadership in the HBM segment, which it almost evenly splits with Samsung Electronics. Kioxia will likely transform NAND flash lines into those for HBMs, used for generative AI applications, high-performance data centers and machine learning platforms, contributing to a revival in Japan’s semiconductor market.”
The Nvidia GeForce RTX 5090 has long been a hot topic in the tech rumor mill as a contender for the best graphics card of the future, and the latest rumors reveal even more about its memory specifications.
According to well-known and reliable Twitter hardware leaker Kopite7kimi, the RTX 5090 will feature a 512-bit memory bus, 33% wider than the one on Nvidia’s RTX 4090. This would allow for higher memory bandwidth and greater memory capacity; it could even have 32GB of VRAM, using sixteen 2GB GDDR7 modules.
There’s also a report that the RTX 5090 will use 28Gbps GDDR7 memory modules, 33% faster than the 21Gbps memory in the RTX 4090. Most impressive, though, is the alleged card’s memory bandwidth: an apparent 77% boost over the 4090, which would give it around 1,800GB/s. This tallies with a previous rumor stating that the card would be up to 70% faster than the 4090.
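Those rumored figures can be sanity-checked with the usual formula for GPU memory bandwidth: bus width in bits times per-pin data rate in Gbps, divided by eight. (The RTX 4090 numbers below are its published 384-bit, 21Gbps configuration; the RTX 5090 numbers are the rumored ones.)

```python
# GPU memory bandwidth = bus width (bits) x data rate (Gbps per pin) / 8.

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

rtx_4090 = bandwidth_gbs(384, 21)  # 1008 GB/s (published spec)
rtx_5090 = bandwidth_gbs(512, 28)  # 1792 GB/s, close to the rumored 1,800GB/s

print(rtx_5090 / rtx_4090 - 1)  # ~0.78, in line with the reported ~77% uplift
```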
If this RTX 5090 turns out to be true, the specs would be absolutely mind-blowing and would nearly justify what would certainly be a massive price hike compared to current-gen Nvidia cards.
The RTX 5090 could be in trouble
What’s interesting is that other memory producers have been promoting 32Gbps GDDR7 modules, meaning that if these rumors are true then Nvidia is purposefully slowing down the graphics cards to reduce costs and increase yields. But thanks to the Super series refresh of Nvidia cards, we could possibly see an updated version without these restrictions.
However, there seems to be one major caveat: the release date. It seems that Nvidia’s 5090 will be released in 2025, meaning that the rumored Nvidia RTX 4090 Super and RTX Titan would have to keep Team Green’s lead until then, a prospect made much more difficult by AMD.
Team Red is prepping its own comeback with the mid-range RDNA 4 graphics cards, which are shaping up to be Nvidia RTX 4080 and 4060 Ti killers. Of course, it’s possible for it to also be released in 2025, but if it comes out before Team Green’s RTX 5000-series then Team Red will still have a major advantage.
We’ll have to wait and see what happens, but regardless we’re sure to have some great competition in the graphics cards market.