AMD is introducing two new adaptive SoCs – Versal AI Edge Series Gen 2 for AI-driven embedded systems, and Versal Prime Series Gen 2 for classic embedded systems.
Multi-chip solutions typically come with significant overheads, while no single conventional hardware architecture is fully optimized for all three AI phases – preprocessing, AI inference, and postprocessing.
To tackle these challenges, AMD has developed a single-chip heterogeneous processing solution that streamlines these processes and maximizes performance.
Early days yet
The Versal AI Edge Series Gen 2 adaptive SoCs provide end-to-end acceleration for AI-driven embedded systems, which the tech giant says are built on a foundation of improved safety and security. AMD has integrated a high-performance processing system, incorporating Arm CPUs and next-generation AI Engines, with top-class programmable logic, creating a device that expertly handles all three computational phases required in embedded AI applications.
AMD says the Versal AI Edge Series Gen 2 SoCs are suitable for a wide spectrum of embedded markets, including those with high-security, high-reliability, long-lifecycle, and safety-critical demands. Use cases include autonomous driving, industrial PCs, autonomous robots, edge AI boxes, and ultrasound, endoscopy, and 3D imaging in health care.
The integrated processing system includes up to 8x Arm Cortex-A78AE application processors and up to 10x Arm Cortex-R52 real-time processors, with support for USB 3.2, DisplayPort 1.4, 10G Ethernet, PCIe Gen5, and more.
The devices meet ASIL D / SIL 3 operating requirements and are compliant with a range of other safety and security standards. They reportedly offer up to three times the TOPS/watt for AI inference and up to ten times the scalar compute with powerful CPUs for postprocessing.
Salil Raje, senior vice president of AMD’s Adaptive and Embedded Computing Group, said, “The demand for AI-enabled embedded applications is exploding and driving the need for solutions that bring together multiple compute engines on a single chip for the most efficient end-to-end acceleration within the power and area constraints of embedded systems. Backed by over 40 years of adaptive computing leadership in high-security, high-reliability, long-lifecycle, and safety-critical applications, these latest generation Versal devices offer high compute efficiency and performance on a single architecture that scales from the low-end to high-end.”
Early access documentation and evaluation kits for the devices are available now. The first silicon samples of Versal Series Gen 2 are expected at the start of next year, with production slated to begin late 2025.
There’s no shortage of startups pushing technology that could one day prove pivotal in AI computing and memory infrastructure.
Celestial AI, which recently secured $175 million in Series C funding, is looking to commercialize its Photonic Fabric technology, which aims to redefine optical interconnects.
Celestial AI’s foundational technology is designed to disaggregate AI compute from memory to offer a “transformative leap in AI system performance that is ten years more advanced than existing technologies.”
Lower energy overhead and latency
The company has reportedly been in talks with several hyperscale customers and a major processor manufacturer about integrating its technology. Though specific details remain under wraps, that manufacturer is quite likely to be AMD, since AMD Ventures is one of Celestial AI’s backers.
As reported by The Next Platform, the core of Celestial AI’s strategy lies in its chiplets, interposers, and optical interconnect technology. By combining DDR5 and HBM memory, the company aims to significantly reduce power consumption while maintaining high performance levels. The chiplets can be used for additional memory capacity or as interconnects between chips, offering speeds comparable to NVLink or Infinity Fabric.
“The surge in demand for our Photonic Fabric is the product of having the right technology, the right team and the right customer engagement model”, said Dave Lazovsky, Co-Founder and CEO of Celestial AI.
“We are experiencing broad customer adoption resulting from our full-stack technology offerings, providing electrical-optical-electrical links that deliver data at the bandwidth, latency, bit error rate (BER) and power required, compatible with the logical protocols of our customer’s AI accelerators and GPUs. Deep strategic collaborations with hyperscale data center customers focused on optimizing system-level Accelerated Computing architectures are a prerequisite for these solutions. We’re excited to be working with the giants of our industry to propel commercialization of the Photonic Fabric.”
While Celestial AI faces challenges in timing and competition from other startups in the silicon photonics space, the potential impact of its technology on the AI processing landscape makes it a promising contender. As the industry moves towards co-packaged optics and silicon photonic interposers, Celestial AI’s Photonic Fabric could play a key role in shaping the future of AI computing.
Horizon Forbidden West has come to PC, and it’s given me another reason not to buy a PS5. I’ve bought every generation of PlayStation console since the OG model, but with Sony’s shift to (belatedly) porting most of its exclusives to PC, it just doesn’t seem worth splashing out on a new console when I can just wait for the games I want to play to come to me.
So, I was very happy to hear that Horizon Forbidden West was going to be ported to PC. As a big fan of the original game, which I played on PS4, I’d been looking forward to playing it.
Of course, as a visually impressive first-party game from Sony, I was also keen to see how it performed on our 8K rig. As you can see in the specs box on the right, our rig has remained largely unchanged for over a year, because it remains a formidable machine – and, crucially, the Nvidia GeForce RTX 4090 graphics card that does the bulk of the work when gaming has yet to be beaten. It’s still the best graphics card money can buy.
With rumors swirling that Sony is planning on releasing a more powerful PS5 Pro console in the near future that could target 8K resolutions through a mix of more powerful hardware and upscaling technology, Horizon Forbidden West at 8K on PC may give us an idea of the kind of visuals future PlayStation games may offer.
It also suggests what obstacles Sony will face if the PS5 Pro will indeed target 8K resolutions. Despite being almost two years old, the RTX 4090 GPU still costs more than its original launch price, hovering around $2,000/£2,000. While the PS5 Pro will likely be more expensive than the standard PS5, there’s no way it’ll be even half the price of Nvidia’s GPU – and that’s before you add in the cost of the other PC components required. Basically, you can’t currently buy an affordable 8K gaming machine that is priced for mainstream success. That’s the scale of the challenge Sony faces.
Spoilt for choice
One of the best things about Sony’s initiative to bring its games to PC, apart from giving me an excuse not to spend money I don’t have on a PS5, is that they usually come with an excellent choice of PC-centric options, including support for upscaling technology from Nvidia and support for ultrawide monitors.
Horizon Forbidden West continues this streak, and the PC port has been handled by Nixxes Software, which has handled many previous PlayStation to PC ports.
This latest release is particularly noteworthy as not only does it support DLSS 3 for Nvidia RTX graphics, but it also supports competing upscaling tech in the form of AMD FSR 2.2 and Intel XeSS.
All three of these features allow the game to run at a lower resolution, with the images then upscaled so that the game is displayed at a higher resolution, but without the additional strain on your PC’s graphics card of rendering natively.
This mainly allows less powerful GPUs to hit resolutions with graphical effects enabled that they usually wouldn’t be able to handle. It also allows the mighty RTX 4090 to reach the demanding 8K resolution (7680 × 4320) in certain games while maintaining a playable framerate.
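To put some numbers on that, here’s a minimal C sketch of the internal resolutions these quality modes imply at 8K. The per-axis scale factors are assumptions based on Nvidia’s published DLSS ratios, not figures confirmed for this particular game:

```c
// Hedged sketch: internal render resolutions implied by common upscaler
// modes at 8K. Scale factors are assumed from Nvidia's published DLSS
// ratios; individual games and vendors may differ.
#include <stdio.h>

int main(void) {
    const int out_w = 7680, out_h = 4320;  // 8K output
    const struct { const char *mode; double scale; } modes[] = {
        {"Quality",           1.5},  // renders at 2/3 of output per axis
        {"Performance",       2.0},  // 1/2 per axis
        {"Ultra Performance", 3.0},  // 1/3 per axis
    };
    printf("8K output: %.1f million pixels\n", out_w * (double)out_h / 1e6);
    for (int i = 0; i < 3; i++) {
        int w = (int)(out_w / modes[i].scale);
        int h = (int)(out_h / modes[i].scale);
        // Pixel load falls with the square of the per-axis scale factor.
        printf("%-17s: %dx%d (%.1f%% of output pixels)\n",
               modes[i].mode, w, h, 100.0 / (modes[i].scale * modes[i].scale));
    }
    return 0;
}
```

Under those assumed ratios, Ultra Performance at 8K works out to a 2560 × 1440 internal render – roughly a ninth of the pixels – which is why even very demanding settings can become playable.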
By supporting the three major upscaling tools, Horizon Forbidden West gives users much more choice (both FSR and XeSS work for a range of GPUs, while DLSS is exclusive to recent Nvidia GPUs) – and it also gives me a chance to see which upscaling tech performs the best.
First up: DLSS
First, I played Horizon Forbidden West at the 8K resolution of 7680 × 4320 and the graphics preset at ‘Very High’ – which is the highest quality on offer. With DLSS turned off (so the game is running at native 8K), my 8K test rig managed to run Horizon Forbidden West at an average of 32 frames per second (fps).
Considering that this is a graphically-intensive game running at the highest graphics settings and at a resolution that’s pushing around 33 million pixels per frame, this is very impressive, and is a testament to the raw power of the RTX 4090, the rest of the components inside the rig built by Stormforce Gaming, and the talents of Guerrilla Games (developers of the game) and Nixxes Software.
I feel that 30fps is the minimum frame rate for a playable game, so if you wanted to play Horizon Forbidden West at a native 8K resolution, that’s certainly possible. If you drop the graphics preset, then the frame rate will go up – though at the cost of graphical fidelity.
Of course, you don’t spend around $2,000 on a GPU to get 32fps in a game, so I turned on DLSS and set it to ‘Quality’, which minimizes the amount of upscaling performed to preserve image quality as much as possible. This saw the average framerate jump to 45fps, with a maximum of 60.7fps.
One thing to note with my results, which you can view in the chart above, is that because Horizon Forbidden West doesn’t have a built-in benchmark tool, I had to play the same section over and over again, using MSI Afterburner to record my framerate. I chose a section of the game with large open spaces, water effects and a combat encounter, and I tried to make each playthrough, lasting around eight minutes, as similar as possible. However, my playthroughs weren’t identical, as some things, such as enemy attacks, would change, and this explains why there are some discrepancies between results. Still, it should give you a good idea of the difference each setting makes.
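For the curious, the number-crunching behind those averages is simple. Here’s a hedged C sketch of how a frame-time log (one milliseconds-per-frame value per line, a format tools like MSI Afterburner can export) turns into an average fps figure – the file name and exact format are assumptions for illustration, not specifics of my actual setup:

```c
// Hedged sketch: computing average fps from a frame-time log.
// Assumes one frame time in milliseconds per line; the file name
// and format are illustrative, not tied to any particular tool.
#include <stdio.h>

int main(void) {
    FILE *f = fopen("frametimes.txt", "r");
    if (!f) { perror("frametimes.txt"); return 1; }
    double ms, total_ms = 0.0;
    long frames = 0;
    while (fscanf(f, "%lf", &ms) == 1) {
        total_ms += ms;
        frames++;
    }
    fclose(f);
    if (frames > 0)
        // Average fps = frames rendered / total seconds elapsed.
        printf("%ld frames, average %.1f fps\n",
               frames, frames / (total_ms / 1000.0));
    return 0;
}
```

Averaging frame times rather than instantaneous fps readings avoids over-weighting the fastest frames, which is why benchmarking tools generally work this way.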
Next, I turned ‘Frame Generation’ on. This is a new feature exclusive to DLSS 3 and Nvidia’s RTX 4000 series of cards. It uses AI to generate and insert frames between normal frames rendered by the GPU. The goal is to make games feel even smoother with higher, more consistent framerates while maintaining image quality.
As the chart shows, this gave the game another bump in frames per second. I then tested the other DLSS settings with Frame Generation left on.
With DLSS set to Ultra Performance, I hit 59.3fps at 8K – basically the 60fps goal I aim for in these tests as a balance of image quality and performance. With Ultra Performance, the RTX 4090 renders the game at a much lower resolution, then uses DLSS to upscale to 8K, and this heavier reliance on upscaling can lead to image quality that suffers from a lack of sharpness and detail, plus graphical artifacts. The good news is that DLSS 3 is a big improvement over previous versions, and the hit to image quality is far less noticeable these days.
So, thanks to DLSS, you can indeed play Horizon Forbidden West at 8K. But how do AMD and Intel’s rival technologies cope?
AMD FSR 2.2 tested
AMD’s FSR 2.2 technology isn’t as mature as Nvidia’s DLSS 3, but it has a noteworthy feature that DLSS lacks: it’s open source and doesn’t just work with AMD graphics cards – Nvidia and Intel GPUs can make use of it as well.
This makes it far more accessible than DLSS, which is exclusive to new and expensive Nvidia GPUs, and for many people this flexibility makes up for any shortfall in performance.
As you can see from my results above, FSR 2.2 provides a decent jump in frame rates compared to running Horizon Forbidden West natively at 8K, though at each quality setting, it doesn’t quite keep up with DLSS 3’s results.
The best result I managed was with FSR set to ‘Ultra Performance’, where it hit 55.2fps on average – below DLSS 3’s best, but certainly not bad, and close to doubling the game’s performance compared with playing it natively.
As well as being unable to hit the same highs as DLSS 3, AMD FSR 2.2’s image quality at Ultra Performance isn’t quite as good as DLSS 3 at similar settings, with a few instances of shimmering and ghosting becoming noticeable during my playthrough.
Intel XeSS results
Finally, I tested out Intel’s XeSS technology. While there is a version of XeSS designed specifically for Intel Arc graphics cards, as with FSR you can use XeSS with various GPU brands, giving gamers yet another upscaling tool to try out. As with most things, the more choice there is for consumers, the better.
XeSS hasn’t been around for as long as DLSS or FSR, and as you can see from the results above, it wasn’t able to match either Nvidia’s or AMD’s solution. There’s no ‘Ultra Performance’ mode either, so XeSS hits its highest framerates when set to ‘Performance’, with an average of 50.6fps. This delivers a perfectly playable experience at 8K, but it’s noticeably more sluggish than playing with DLSS at Ultra Performance.
However, it still gives you a decent fps bump over native 8K, and with Intel being one of the biggest proponents of artificial intelligence, I’m pretty confident that XeSS performance will improve as the technology matures. The fact that you can use it with GPUs from Intel’s rivals is also a big plus.
Conclusion: DLSS for the win (again)
Once again, DLSS 3 has proved to be the best way of getting a game to run at 8K and 60fps with minimal compromises.
Not only did it allow the RTX 4090 to hit 59.3fps on average while playing Horizon Forbidden West, but it also looked the best with minimal impact to image quality.
This may not come as too much of a surprise – DLSS has been around for quite a while now, and Nvidia has been putting a lot of work into improving the technology with each release.
Also, while Nvidia’s preference for proprietary tech means you need the latest RTX 4000 series of GPUs to get the most out of it, this does at least mean Team Green can make use of exclusive features of its GPUs such as Tensor Cores. With AMD and Intel’s more open implementations, they are unable to target specific hardware as easily – though FSR and XeSS are available to a much wider range of PC gamers.
And, while FSR doesn’t quite match DLSS performance with Horizon Forbidden West, it comes close, and if you don’t have an Nvidia GPU, this is a fine alternative. As for XeSS, it shows plenty of promise.
Upscaling tech has made gaming at 8K on PC achievable, and it’s great to see increased choice for users. So, if Sony is indeed working on a PS5 Pro that aims to run games like Horizon Forbidden West at 8K, it’s going to have to come up with its own upscaling tech (or adapt FSR or XeSS) if it wants to compete.
The infamous Rowhammer DRAM attack can now be pulled off on some AMD CPUs as well, academic researchers from ETH Zurich have proved.
As reported by BleepingComputer, the researchers dubbed the attack ZenHammer, after cracking the complex, non-linear DRAM addressing functions in AMD platforms.
For the uninitiated, the Rowhammer DRAM attack revolves around changing data in Dynamic Random-Access Memory (DRAM) by repeatedly “hammering”, or accessing, specific rows of memory cells. Memory cells store information as electric charges, and these charges determine the value of the bits, which can be either a 0 or a 1. Because the memory cells in today’s chips are packed so densely, “hammering” one row can alter the state of adjacent rows, “flipping” their bits. By flipping specific bits, attackers can extract cryptographic keys or other sensitive data, BleepingComputer explained.
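The core of a classic hammering loop is surprisingly small. The C sketch below is purely illustrative – it assumes addr_a and addr_b already map to DRAM rows adjacent to a victim row, and finding such address pairs despite AMD’s complex, non-linear address mapping is precisely the hard part ZenHammer cracked:

```c
// Illustrative sketch of a classic Rowhammer access loop (x86).
// addr_a and addr_b are assumed to map to rows adjacent to a victim
// row; discovering such addresses is the hard part of a real attack.
#include <emmintrin.h>  // _mm_clflush

void hammer(volatile char *addr_a, volatile char *addr_b, long iterations) {
    for (long i = 0; i < iterations; i++) {
        (void)*addr_a;  // read activates addr_a's DRAM row
        (void)*addr_b;  // read activates addr_b's DRAM row
        // Flush both lines from the CPU cache so the next reads go
        // all the way to DRAM and keep re-activating the rows.
        _mm_clflush((const void *)addr_a);
        _mm_clflush((const void *)addr_b);
    }
}
```

With enough activations inside a single refresh window, charge leaking from the hammered rows can flip bits in the victim row between them.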
Purely theoretical?
This means AMD processors have joined the Intel and Arm CPUs that were already known to be vulnerable to hammering attacks.
The researchers tested their theory on different platforms. For AMD Zen 2, they were successful 70% of the time. For AMD Zen 3, 60%. For AMD Zen 4, however, they were only successful 10% of the time, suggesting that “the changes in DDR5 such as improved Rowhammer mitigations, on-die error correction code (ECC), and a higher refresh rate (32 ms) make it harder to trigger bit flips.”
While academic research often stays purely theoretical, the researchers said this attack could be pulled off in the real world, too: they simulated successful attacks that compromised the system’s security by manipulating page table entries for unauthorized memory access.
For those fearing ZenHammer, it’s important to stress that these types of attacks are quite difficult to pull off. What’s more, there are patches and mitigations: earlier this week, AMD released a security advisory with mitigation options.
AMD puts out some of the best processors on the market that can compete with Intel’s, including the best cheap processors that are sure to play gently with your wallet. And right now Walmart is offering an excellent deal on a Ryzen 5 processor, bringing it down to a budget-friendly price.
If you’re interested in getting your hands on an excellent gaming processor that’s wallet-friendly, then you’ll want to take advantage of this fantastic sale while you still can.
Today’s best AMD Ryzen 5 7600 CPU deal
The AMD Ryzen 5 7600 is one of the best AMD processors when it comes to price versus performance. It features six cores that, thanks to AMD Simultaneous Multithreading (SMT), handle an effective 12 threads. It also has 32MB of L3 cache and runs at 3.8GHz, boosting up to 5.1GHz.
There are two downsides. First, it lacks 3D V-Cache, AMD’s technique of stacking cache vertically – tripling the CPU’s L3 cache – for faster calculations and a nice speed boost. Second, it tends to run hot for a budget processor, so you’ll need a solid heat sink and good case ventilation.
If you’ve been following the somewhat curious tale of the global launch of the RX 7900 GRE – the ‘Golden Rabbit Edition’ graphics card that was initially exclusive to China – you may recall it was artificially limited to 2.3GHz for the memory clock speed by a bug, as confirmed by AMD. Apparently, this was an issue with an incorrect memory tuning limit.
Well, that glitch has now been remedied with AMD’s new Adrenalin Edition 24.3.1 driver. As Team Red says in the release notes, there’s a fix for the “maximum memory tuning limit [being] incorrectly reported on AMD Radeon RX 7900 GRE graphics products.”
VideoCardz noticed this development and reports that with the new driver, Tech Powerup has found it can ramp up the memory clock by 300MHz, giving a sizeable leap in performance. Running at 2.6GHz rather than 2.3GHz results in a 15% boost in 3DMark (Time Spy).
Note that this is the memory clock, not to be confused with the GPU clock speed. The GPU chip is also limited for overclocking, as our previous report highlighted, but AMD hasn’t taken action on that front.
Analysis: Going GREat guns
This is just one synthetic test, so we need to be a bit cautious, but other benchmarking online from Hardware Unboxed shows similarly impressive results (in gaming tests, and a host of them, too).
However, we should point out that other reports online suggest that RX 7900 GRE owners are far from guaranteed to be able to run a VRAM overclock as ambitious as 2.6GHz, or even get past 2.5GHz (for that matter, cresting 2.4GHz is proving challenging for some graphics cards anecdotally).
As ever with any chip, the mileage you’ll get out of your video memory will vary – but even so, everyone should be able to realize a worthwhile performance benefit here. If not 10-15%, there should still be some decent headroom now AMD has fixed this bug, with many folks reporting around a 5% boost or close to that at the very least.
With this extra chunk of frame rates under its belt, the 7900 GRE is now looking an even more tempting proposition. Assuming, of course, that you’re confident enough with PC hardware to engage in overclocking shenanigans – not everyone will want to do so.
The RX 7900 GRE was already a great mid-range performer before this happened, and at its current price, it seems to be the best GPU in this bracket right now – certainly for those willing to push it with an overclock. It’s looking better than the rival RTX 4070 Super with this new AMD driver, and the RX 7900 GRE is about 7% cheaper than Team Green’s graphics card going by current pricing on Newegg in the US (for the cheapest models in stock).
Relative pricing may be a different story in your region, but you get the point. Also, with the 7900 GRE being within 10% of the performance of the much pricier 7900 XT now, as Hardware Unboxed points out, it’s a possible alternative to the latter.
We’d be remiss not to mention that, compared with the Nvidia RTX 4070 Super, you are losing out on the ray tracing and DLSS 3 front, of course – but for pure rasterization it’s the 7900 GRE all the way as pricing stands, with this extra driver boost. Nvidia and its partners may need to respond here…
Microsoft recently brought in a new feature for DirectX 12 that could considerably improve gaming performance, and thanks to AMD, we have a clearer idea of what it might mean for frame rates in the future.
This is Work Graphs, a tech previously in testing but now officially introduced for DX12. It’s a feature that aims to reduce CPU bottlenecking in PC games by shifting some of the heavy processing work from the processor to the GPU, and by reducing communication between the CPU and graphics card, cutting out overheads there too.
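A toy model makes the appeal obvious. The C sketch below is not real DirectX 12 code, and the cost figures are invented purely for illustration – but it captures the arithmetic of paying a per-draw-call CPU cost thousands of times versus paying a single submission cost and letting the GPU generate its own work:

```c
// Toy model (not real D3D12 / Work Graphs code): why GPU-driven work
// submission helps. All cost figures are invented for illustration.
#include <stdio.h>

int main(void) {
    const double cpu_cost_per_submit_us = 5.0;  // assumed CPU overhead per draw call
    const double gpu_cost_per_item_us = 2.0;    // assumed GPU work per item (same both ways)
    const int items = 10000;

    // CPU-driven: the processor pays the submission overhead for every item.
    double cpu_driven_us = items * (cpu_cost_per_submit_us + gpu_cost_per_item_us);
    // GPU-driven: one submission, then the GPU expands the work itself.
    double gpu_driven_us = cpu_cost_per_submit_us + items * gpu_cost_per_item_us;

    printf("CPU-driven: %.1f ms vs GPU-driven: %.1f ms\n",
           cpu_driven_us / 1000.0, gpu_driven_us / 1000.0);
    return 0;
}
```

In reality, CPU and GPU work overlap and Work Graphs has its own runtime cost, so real-world gains depend heavily on how CPU-bound a given game is.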
As Tom’s Hardware noticed, AMD has announced that it’s using draw calls and mesh nodes as part of Work Graphs to usher in better gaming performance, and has provided us with an idea of what impact those advancements will (or should) have.
We are told that a benchmark using an RX 7900 XTX GPU produced a major performance gain for Work Graphs with mesh shaders compared to the frame rate without them – in the latter case, performance was 64% slower, meaning the Work Graphs path ran nearly three times faster.
This is while “using the AMD procedural content Work Graphs demo with the overview, meadow, bridge, wall, and market scene views.”
AMD has a demo video to watch in its GDC 2024 blog post on this development.
Analysis: Managing expectations
This is exciting stuff on the face of it – comments from a couple of game developers had already got us excited about how Work Graphs could pan out. The feature has serious potential for faster fps, and this benchmark from AMD illustrates the gains that may be in the pipeline – may being the operative word.
As ever, a single benchmark on a particular test system is of limited use in gauging the wider implications of Work Graphs, so let’s keep a level head here. As we’ve mentioned previously, there could be a broad range of gains dependent on all sorts of factors, and we have to remember that Work Graphs itself does use resources to function – this isn’t a free boost.
But by all accounts, it could be a large step forward, and Work Graphs is an enticing prospect for the future of smoother gaming, there’s no denying that. More testing and benchmarking, please…
South Korean chipmaker SK Hynix, a key Nvidia supplier, says it has already sold out of its entire 2024 production of stacked high-bandwidth memory DRAMs, crucial for AI processors in data centers. That’s a problem, given just how in demand HBM chips are right now.
However, a solution might have presented itself, as reports say SK Hynix is in talks with Japanese firm Kioxia Holdings to jointly produce HBM chips.
SK Hynix, the world’s second-largest memory chipmaker, is a major shareholder in Kioxia, the world’s No. 2 NAND flash manufacturer. If the deal goes ahead, it could see the high-performance chips produced in Japan at facilities co-operated by Kioxia and US-based Western Digital Corp.
A merger hangs in the balance
What makes this situation even more interesting is that Kioxia and Western Digital have been engaged in merger talks, something SK Hynix is opposed to over fears it would reduce its opportunities with Kioxia. As SK Hynix is a major shareholder in Kioxia, the Japanese firm and Western Digital’s merger can’t go ahead without its blessing, and this new move could be seen as an important sweetener to help things progress.
After the news of the potential deal broke, SK Hynix issued a statement saying simply, “There is no change in our stance that if there is a collaboration opportunity, we will enter into discussion on that matter.”
If the merger between Kioxia and Western Digital does proceed, the combined company would potentially be the biggest manufacturer of NAND memory on the planet, leapfrogging current leader Samsung – so there’s a lot to play for.
The Korea Economic Daily says of the move, “The partnership is expected to cement SK Hynix’s leadership in the HBM segment, which it almost evenly splits with Samsung Electronics. Kioxia will likely transform NAND flash lines into those for HBMs, used for generative AI applications, high-performance data centers and machine learning platforms, contributing to a revival in Japan’s semiconductor market.”
We’ve written previously about tinybox. The $15,000 AI server system is powered by AMD Radeon RX 7900 XTX graphics cards and can reportedly deliver 37% of Nvidia H100 compute performance.
It seems, however, that the creators of tinybox have run into problems with bugs affecting the Radeon-based platform. After parent company tiny corp posted several tweets expressing frustration with AMD’s AI acceleration toolkit – in which it cheekily tagged AMD rivals Nvidia and Intel – AMD’s CEO, Lisa Su, stepped in, saying her team was working to fix the issues.
Unfortunately, the fixes weren’t good enough for tiny corp, which fired off another round of frustrated tweets, asking AMD to “fix their basic s*t” and suggesting the tech giant open source its firmware so that the tiny startup could do what AMD seemed incapable of – namely “fix their LLVM spilling bug and write a fuzzer for HSA”.
If AMD open sources their firmware, I’ll fix their LLVM spilling bug and write a fuzzer for HSA. Otherwise, it’s not worth putting tons of effort into fixing bugs on a platform you don’t own. https://t.co/c4I2So27YG – March 5, 2024
The dilemma
After things got even more heated, Su tweeted “Thanks for the collaboration and feedback. We are all in to get you a good solution. Team is on it.”
While that could be good news for tiny corp – time will tell – Su could well face a backlash for essentially stepping in to support the use of AMD’s consumer products in a server aimed at the enterprise.
As jlake3 commented over on Tom’s Hardware, “tinybox is buying consumer cards instead of datacenter models, but seems to expect a datacenter SLA? They got revised firmware within 6 hours of the earliest linked tweet and a call with engineering the next day, which is more than I’d think a startup buying a bunch of gaming cards would qualify for when there are actual paying enterprise clients, and they appear to be having a public meltdown that AMD isn’t doing more for a startup with less than 100 servers built and none yet shipped (and expressly avoiding using pro GPUs).”
He also made another good point. “As for Nvidia, their EULA forbids using GeForce products for datacenter CUDA applications, so they’d definitely not be dedicating anyone to talk to tinybox in this situation, except maybe a lawyer.”
Spartan UltraScale+ is the latest addition to AMD‘s extensive portfolio of cost-optimized Field Programmable Gate Arrays (FPGAs) and adaptive SoCs. It has been introduced to replace the Xilinx Spartan 6 and Spartan 7 lines.
The new Spartan UltraScale+ devices are designed for a wide range of I/O-intensive applications at the edge. AMD says its latest FPGAs can deliver up to 30 percent lower total power consumption compared to the previous generation – energy efficiency is a hot topic right now – while boasting the most robust set of security features in AMD’s cost-optimized portfolio.
“For over 25 years the Spartan FPGA family has helped power some of humanity’s finest achievements, from lifesaving automated defibrillators to the CERN particle accelerator advancing the boundaries of human knowledge,” said Kirk Saban, corporate vice president, Adaptive and Embedded Computing Group, AMD. “Building on proven 16nm technology, the Spartan UltraScale+ family’s enhanced security and features, common design tools, and long product lifecycles further strengthen our market-leading FPGA portfolio and underscore our commitment to delivering cost-optimized products for customers.”
Into the 2040s… and beyond!
The Spartan UltraScale+ FPGAs offer a number of state-of-the-art security features, including support for Post-Quantum Cryptography with NIST-approved algorithms to provide robust IP protection against ever-evolving threats. They also include a physical unclonable function (PUF), giving each device a unique fingerprint for added security.
The Spartan UltraScale+ FPGA family sampling and evaluation kits are expected to be available in the first half of 2025, with tools support – starting with the AMD Vivado Design Suite – arriving in the fourth quarter of 2024.
What about that super-long lifecycle being promised? AMD says the Spartan UltraScale+ FPGA will be supported into the 2040s, and this is just the standard lifecycle. AMD will likely offer an extended lifecycle on top of that (as it has with past FPGAs), which will take the chip’s support well into the future.
That might seem like some serious generosity on AMD’s behalf, but as Serve The Home explains, “Spartan FPGAs are often in products that take years to design and then are sold and used for decades in the future.”