Lenovo has taken the wraps off its sleek, lightweight, and power-packed ThinkPad P1 Gen 7.
The new laptop supports up to an Intel Core Ultra 9 185H CPU, and users can choose from integrated Intel Arc graphics, Nvidia RTX 1000/2000/3000 Ada Generation GPUs, or an Nvidia GeForce RTX 4060/4070 GPU, allowing it to handle most AI processing needs.
The ThinkPad P1 Gen 7 is the first mobile workstation to come with LPDDR5x LPCAMM2 memory, with a capacity of up to 64GB. It can also accommodate two PCIe Gen 4 x4 M.2 2280 SSDs for up to 8TB of storage.
Crafted from premium aluminum, the laptop comes with a 16-inch display with a 16:10 aspect ratio, and narrow bezels providing a 91.7% screen-to-body ratio. Lenovo offers a choice of three displays – FHD+ IPS, QHD+ IPS, and UHD+ OLED Touch.
Connectivity comes in the form of Wi-Fi 7 and Bluetooth 5.3, and the laptop sports 2 x Thunderbolt 4 ports, 1 x USB-C (10Gbps), 1 x USB-A (5Gbps), an SD Express 7.0 card reader, HDMI 2.1, and an audio jack.
The new device features a liquid metal thermal design (in select configurations) which Lenovo says “enhances cooling performance and long-term reliability, catering to critical workflows when complex tasks require maximum performance for extended periods.”
Battery life is always important in laptops, and the ThinkPad P1 Gen 7 packs a 90Whr customer-replaceable battery.
“Lenovo’s latest ThinkPad P series mobile workstations are taking a significant step forward by featuring cutting-edge Intel Core Ultra processors equipped with a dedicated neural processing engine,” said Roger Chandler, Vice President and General Manager, Enthusiast PC and Workstation Segment, Intel.
“Designed to enhance AI PC capabilities on laptops, this technology also improves performance, power efficiency and enables superior collaboration experiences, allowing users to be creative for longer periods without the need for constant charging.”
ThinkPad P1 Gen 7 will be available from June 2024, with prices starting at $2,619.
As the AI computing revolution hits its apex, Nvidia has confidently stated in a new briefing that GPU technology, not dedicated NPUs, is the way forward.
As reported by Videocardz, a recent Nvidia briefing on the “Premium AI PC” has seen Team Green confidently back its GPU technology against the current crop of NPUs hitting the scene. While “Basic AI” NPU-based machines are capable of up to 45 TOPS, through RTX that figure can be expanded to 1,300+ TOPS (a 2,788% increase).
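For context, the percentage Nvidia quotes follows directly from those two throughput figures. Here is a quick sketch of the arithmetic using the 45 and 1,300 TOPS numbers above; keep in mind peak TOPS is a theoretical ceiling, not a guarantee of real-world speedup.

```python
# Quick check of the uplift Nvidia is claiming, using the figures quoted
# above: ~45 TOPS for an NPU-based "Basic AI" PC versus 1,300+ TOPS for an
# RTX-equipped "Premium AI PC".
npu_tops = 45
rtx_tops = 1300

ratio = rtx_tops / npu_tops                                # ~28.9x the raw throughput
percent_increase = (rtx_tops - npu_tops) / npu_tops * 100  # relative increase

print(f"{ratio:.1f}x the throughput, a {percent_increase:,.1f}% increase")
# -> 28.9x the throughput, a 2,788.9% increase
```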
It’s what’s been described as the “iPhone moment of AI”, where the technology has finally broken into the mainstream much as Apple‘s handsets did for the smartphone. That’s because AI has become a part of gaming, video production, productivity, development, and creativity, as well as everyday computing.
It’s part of the AI Accelerator Landscape, as leaked by Benchlife.info, which states the company’s intentions for Heavy AI cloud-based GPUs with “1000s of TOPS” and scalable large language models. As for exactly what the “Premium AI PC” experience entails, it encompasses 500 games and applications and an optimized software stack.
What’s more, Nvidia has cornered the market with an install base of over 100 million users across a myriad of devices. It’s not exactly surprising news given the company’s moves within the AI sector and the stock explosion seen over the last year. That’s to say nothing of DLSS, which has been core to the user experience since 2018.
Essentially, the company is taking a victory lap over the burgeoning crop of NPUs, which are available in everything from Apple M3 silicon to Intel Meteor Lake processors. While the upcoming Intel Lunar Lake is rumored to launch with a 100 TOPS NPU, that’s simply dwarfed by what Nvidia is claiming from its “premium” user experience.
A glimpse of things to come at Computex
Nvidia has been making moves in the AI field with everything from upscaling in video games to local language chatbots with Chat with RTX, to robots using its tech. While the company has made some bold claims with its massive userbase, and theoretical throughput, these figures are likely just a taste of what’s to come next month at Computex 2024.
It’s not entirely surprising given the recent unveiling of Nvidia’s Blackwell B200 die, which has been dubbed “the world’s most powerful chip”. Exactly how this factors into gaming remains to be seen, with the company’s rumored RTX 50 series coming at the end of the year. One thing’s for sure, however: AI will play a part in the GPU line’s future.
There’s a new Nvidia GeForce RTX 4060 on the block and while it might not be the best graphics card out there, it has one massive advantage: it’s adorably cat-themed.
The card is currently available through one of Nvidia’s Chinese board partners, ASL, and can be shipped out to the US. Even better, the price is less than a normal RTX 4060. The design is a collaboration between ASL and SupremoCat, a Chinese cartoon brand, and features Wuhuang and Bazhahey (a cat and pug duo) wearing sunglasses and placed at the center of each fan. To top it off, the card is a pretty pastel pink, giving it even cuter vibes.
If you’re in the US, you can order this card on AliExpress for $374.99, which isn’t too shabby. And if you sign up for an AliExpress Choice subscription you can get free delivery on your order, which is an even better incentive considering how slow and expensive international shipping can be.
Make gaming PCs and peripherals cuter, please
I’ve been shouting from the digital rooftops for years that we need more cute tech to combat the dreaded gamer aesthetic that’s still popular with manufacturers to this day. Seeing this adorable collab birth such a cute graphics card is music to my eyes and I want to see more of this in the future.
Gaming setups are normally incredibly boring, a generic mix of RGB lighting trying to liven up black computers and accessories. But having unique colors, designs, shapes, and more does far more to add real personality and distinctiveness to a gamer space. I want people to immediately see my obsession with sickeningly cute animals the minute they lay eyes on my desk, not have strobe lights flash in their eyes.
So yes, I want more pink and purple laptops and keyboards with cat and dog keycaps, and there’s definitely a market out there for people like me.
The MSI Claw has been in a world of trouble ever since its launch, with reviewers and buyers alike criticizing its inconsistent gaming performance, poor optimization, and more. But now it seems the handheld’s luck is finally looking up, thanks to a brand new update.
According to MSI, its latest BIOS and GPU driver updates — the E1T41IMS.106 BIOS (referred to as 106) and the 31.0.101.5445 GPU driver (referred to as 5445) — have increased performance by up to 150%. The updates also apparently smooth out performance issues, allowing players to “smoothly play the top 100 popular games on the Steam platform.”
MSI seems to be very aware of the optimization issues concerning the CPU, as it mentions working with Intel to better stabilize the handheld. The updates also let users update straight from the Windows environment, without having to use a USB drive or dock the system.
You can download the latest BIOS from the official MSI website and the newest GPU drivers from the official Intel Arc website.
Is it too little too late?
While the new updates and continued support are admirable, users are only getting them more than a month after the handheld’s official release. Meanwhile, other PC gaming portables like the Steam Deck and Lenovo Legion Go work right out of the box with solid, stable performance, especially the former, which continues to be the gold standard.
Not to mention how expensive the MSI Claw is, which makes it all the more concerning that it launched in such a state and needs continuous patching and updating just to meet the standard of an older, less powerful portable like the Steam Deck.
At this point, though MSI and Intel are working together, we still don’t know whether it boils down to Intel’s processors or MSI-related issues, though the upcoming launch of the Intel-equipped AOKZOE A2 Ultra should give us a better sense of where the problem lies.
Hopefully, MSI learns from this and, if it releases a Claw 2, ensures that the OS and general performance are up to snuff before launch.
A prominent hardware leaker has uncovered new patch notes revealing that the now-allegedly canceled AMD Navi 4X / 4C graphics cards would have been significantly more powerful than the current AMD flagship RX 7900 XTX.
Uncovered by Kepler_L2 (via Tweaktown), new patch notes for AMD GFX12 supposedly showcase Navi 4X die models, the would-be successors to the Radeon RX 7900 XTX, featuring up to 50% more shader engines. However, it’s not looking likely that anything from RDNA 4 will be as quick as what’s touted here.
Specifically, the patch notes reveal that Navi 4X / 4C GPUs would have featured nine shader engines, a significant upgrade over the six found in Navi 31. RDNA 4 appears to be targeting the value crowd, so while the tech could have rivaled Nvidia’s leading models, the top-end was likely cut in order to keep prices competitive.
It calls back to an earlier leak from the end of last year, when the supposed RX 8900 XTX design reportedly surfaced. Documentation from Moore’s Law is Dead showcased the Navi 4C config overview, with an alleged patent for a complex GPU architecture revealing up to 12 dies working in parallel without a central or master die.
According to Videocardz, AMD decided to cancel the highest-end RDNA 4 GPU, but no reason was offered. To speculate, this could have all come down to pricing. We’ve seen Nvidia‘s mid-range and top-end cards explode in MSRP in the generational gap between Ampere and Ada, so it’s possible that Team Red wanted to avoid the same thing happening here.
Cost is king in the new GPU market
While we’re champions of bleeding-edge hardware, it’s important to remember that the top-end will always be a luxury few can afford. There’s no question that the RTX 4090 is the best graphics card from a raw technical perspective, but it costs nearly $600 more than the RX 7900 XTX at MSRP.
For AMD to compete at the top-end, as it sounds like the 4C GPU could have, we would likely have seen prices creeping up past the $1,000 mark, which its RX 7900 XTX avoided. Until the release of the RTX 4080 Super, AMD had cornered the mid-range market with its line of 1440p and 4K graphics cards for gamers, and losing that edge to compete on a power front likely would have done more harm than good.
The time has never been better to consider a new mid-range graphics card now that AMD has made its latest GPU available worldwide. Naturally, potential buyers are going to compare the RX 7900 GRE vs RTX 4070, given their close proximity in price, but it’s also important to consider what each offers in terms of performance, features, and overall value for your money.
The AMD Radeon RX 7900 GRE is a curious graphics card, arriving worldwide in February 2024 after debuting exclusively in China back in 2023. Now that it’s available to the rest of the world, it offers phenomenal performance for a far more palatable price tag than many of the best graphics cards on the market right now. With the big Navi 31 die and 16GB of VRAM under the hood, it even proves itself to be one of the best 4K graphics cards for those who are on a tighter budget but want some of that sweet, sweet 2160p gaming (with appropriate settings tweaks).
The RTX 4070 has had something of a resurgence recently, despite being effectively superseded by the RTX 4070 Super back in January of this year. The newer card offers 20% more CUDA cores at the same $599 price point, essentially getting you significantly more performance without the need to splash out. But that launch also saw Nvidia cut the RTX 4070’s suggested retail price by about 10%, making it an even stronger contender for the title of best 1440p graphics card.
But there’s more to it than all that, and while we’re fond of both mid-range GPUs, we wouldn’t be TechRadar if we didn’t dig deep into our test bench to put each of these fan-favorites to our extensive battery of performance tests to find out which one comes out on top.
RX 7900 GRE vs RTX 4070: Price
When weighing up a mid-range graphics card, pricing is always paramount. The brand new RX 7900 GRE comes in clutch with an advantage straight out of the gate with its recommended retail price of just $549 / £529.99 / AU$929. In contrast, the RTX 4070 launched at $599 / £589.99 / AU$999 for the Founders Edition model, but its price also dropped to $549 / £529.99 / AU$929 in February.
What’s particularly aggressive about the RX 7900 GRE is how it slots into the current RDNA 3 lineup, being just $50 more than the excellent AMD RX 7800 XT from last year. In comparison, the RTX 4070 is quite the jump up from its sibling, the Nvidia RTX 4060 Ti, which starts at $399 in the US.
On price alone, then, this would appear to be a tie between the RX 7900 GRE and the RTX 4070. But price isn’t the same thing as value, and in the case of the RX 7900 GRE, our testing reveals a better overall performance-to-price ratio, giving you more for that investment and handing it the decisive edge here.
Winner: AMD Radeon RX 7900 GRE
RX 7900 GRE vs RTX 4070: Specs
AMD’s RX 7900 GRE immediately stands out as a mid-range GPU by offering 16GB GDDR6 VRAM on a 256-bit memory bus. It’s built on the AMD Navi 31 die with a total of 5,120 Stream Processors. By comparison, the RTX 4070 is built on the AD104 die with a total of 5,888 CUDA cores and 12GB GDDR6X memory on a 192-bit memory bus.
There we see the immediate architectural differences between mid-range RDNA 3 and Ada. The former opts for a wider memory bus with a larger pool of slower memory, compared to the latter’s smaller pool of considerably quicker memory. Despite their differences, the two models are remarkably close in total bandwidth, with the GRE boasting 576 GB/sec to the RTX 4070’s 504.2 GB/sec.
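Those bandwidth figures fall straight out of bus width and effective memory speed. Below is a minimal sketch of that calculation, assuming the commonly cited effective rates of 18 Gbps GDDR6 on the GRE and 21 Gbps GDDR6X on the RTX 4070; the memory clocks aren’t listed in the spec rundown above, so treat them as assumptions.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, effective_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8 bits per byte) * effective data rate."""
    return bus_width_bits / 8 * effective_rate_gbps

# Assumed effective memory speeds (not stated above):
# 18 Gbps GDDR6 on the RX 7900 GRE, 21 Gbps GDDR6X on the RTX 4070.
print(peak_bandwidth_gb_s(256, 18))  # RX 7900 GRE -> 576.0 GB/s
print(peak_bandwidth_gb_s(192, 21))  # RTX 4070    -> 504.0 GB/s (quoted as 504.2 GB/s)
```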
That slight lead in bandwidth comes at the price of increased power draw, however: the 7900 GRE has a 260W TDP compared to the 4070’s 200W, a difference of 30%. Ada’s architecture is therefore considerably more efficient than its rival. With that said, the red corner wins some favor by fitting its latest GPU with the standard two 8-pin PCIe connectors, with no need for the likes of a 16-pin adapter.
The design languages of both are similar at a basic level: the 7900 GRE and 4070 are both dual-slot GPUs, which means they should be just as at home in a smaller mini-ITX build as in a wider tower.
It’s worth noting that the GRE uses the same chip as the AMD RX 7900 XT and AMD RX 7900 XTX, just with substantially less VRAM and a smaller memory bus to utilize, and the memory interface matches that of the RX 7800 XT, even with slightly slower memory speed. It’s interesting to see how Team Red has effectively repurposed the Navi 31 silicon for a value play, though.
In the end, then, AMD’s offering has more VRAM, a larger memory bus, and a higher bandwidth for less money, so for that reason, we’re giving Team Red the edge here.
Winner: AMD Radeon RX 7900 GRE
RX 7900 GRE vs RTX 4070: Performance
(Image credit: Future / John Loeffler)
Whether you should invest in an RX 7900 GRE or the RTX 4070 ultimately comes down to performance, and here we’ve done an unhealthy amount of benchmarking and testing on these two cards, collecting reams of data to help highlight what you can expect from each. Measuring their general performance using synthetic benchmarks, some things become pretty clear right out of the gate.
The lead taken by the RX 7900 GRE extends into our industry-standard synthetic benchmarks as well.
Taking 3DMark Sky Diver and Fire Strike Extreme as prime examples, AMD’s GPU achieved scores of 169,170 and 27,595, while Nvidia’s offering trailed behind by a significant margin with 120,719 and 20,457.
This is also true with the ray tracing-focused Port Royal with the RX 7900 GRE scoring 11,768 against the RTX 4070’s 10,415. The RTX 4070 does manage to score well ahead of the RX 7900 GRE when it comes to raw compute performance though, so data scientists and ML researchers are going to want to opt for the Nvidia RTX card.
Where we deviate from the script somewhat is with creative workloads where the Nvidia GPU pulls out some very solid wins.
This is most evident in Blender 4.0, where the RTX 4070 posts scores of 2,657, 1,267, and 1,332 in the Monster, Junkshop, and Classroom benchmarks. There’s no other way to phrase it: the GRE just can’t match that output, managing 1,252, 623, and 618, respectively.
In Adobe Photoshop, AMD manages a better showing, with the RX 7900 GRE beating the RTX 4070 in the PugetBench for Creators 1.0 Photoshop benchmark, 10,650 to 9,695, though the RTX 4070 beats the RX 7900 GRE in Adobe Premiere, scoring 12,317 to the RX 7900 GRE’s 11,200 in PugetBench. The RX 7900 GRE also manages to transcode 4K video down to 1080p about 10 FPS faster than the RTX 4070, though both still perform exceptionally well in this test.
But these are really gaming graphics cards at the end of the day, and putting the RX 7900 GRE vs RTX 4070 across several gaming benchmarks really did surprise us.
These cards have more than enough resources to game well above 1080p, but those using older monitors or wanting to drop down to full HD to take advantage of faster frame rates will find a lot to like about both cards. Even here, though, the RX 7900 GRE comes out well ahead of the RTX 4070.
In Cyberpunk 2077 at max settings without RT or upscaling enabled, the RX 7900 GRE manages to pull out a blazing fast 151 FPS compared to the RTX 4070’s 97 FPS, a difference of about 55% in the RX 7900 GRE’s favor. The RX 7900 GRE is even able to match the RTX 4070’s ray-tracing performance in Cyberpunk 2077 (both scoring 46 FPS with Psycho RT on Ultra quality), though notably, the RX 7900 GRE’s minimum FPS is nearly twice that of the RTX 4070’s, so you’ll get much smoother gameplay overall.
In total, the RX 7900 GRE averages about 111 FPS at 1080p, compared to the RTX 4070’s 82 FPS, a difference of about 35% in the RX 7900 GRE’s favor.
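The percentage gaps quoted here are simple relative differences between those averages. A quick check, using the Cyberpunk 2077 and overall 1080p figures above:

```python
def percent_faster(fps_a: float, fps_b: float) -> float:
    """How much faster card A is than card B, expressed as a percentage."""
    return (fps_a - fps_b) / fps_b * 100

# Cyberpunk 2077 at 1080p, max settings, no RT or upscaling
print(f"{percent_faster(151, 97):.1f}%")  # -> 55.7% (the ~55% noted above)

# Overall 1080p averages across our test suite
print(f"{percent_faster(111, 82):.1f}%")  # -> 35.4% (the ~35% noted above)
```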
Moving up to 1440p and starting with AMD’s latest offering, the RX 7900 GRE was able to achieve a 60 FPS average in Metro Exodus at Extreme settings, as well as a 102 FPS average in Cyberpunk 2077 at Ultra. Impressively, this streak continued in Returnal with a 121 FPS average at Epic settings, and in Tiny Tina’s Wonderlands at 116 FPS on Badass.
The RTX 4070 is no slouch either. The mid-range Ada card also managed a 60 FPS average in Metro Exodus on the Extreme preset, but just 65 FPS in Cyberpunk 2077 at Ultra. The story is similar in Returnal and Tiny Tina’s Wonderlands, with averages of 88 FPS and 97 FPS, falling noticeably short of what the RX 7900 GRE was able to output. This is likely down to the smaller memory pool available.
Neither card flourishes in 4K, sadly, so if you’re considering either GPU you’ll want to keep things locked to QHD for the most part.
Looking at Metro Exodus, you’re going to get around half the framerate, with the GRE pulling in only 35 FPS; playable, but far from ideal. It’s a similar story in Cyberpunk 2077, with the RX 7900 GRE averaging 43 FPS. Returnal fared better at 68 FPS on average, and Tiny Tina’s Wonderlands held rock solid at 60 FPS. Playing at 2160p is possible with the RX 7900 GRE, but your results will be less consistent unless you tinker with the settings some.
The RTX 4070 falls well short of the RX 7900 GRE at 4K. You’re looking at quite the reduction in Metro Exodus and Cyberpunk 2077, with an average of just 29 FPS apiece, just short of playability. Returnal does a bit better at 51 FPS but is well behind the 4K@60+ offered by the RX 7900 GRE. Tiny Tina’s Wonderlands also lags behind at 50 FPS, a full 10 FPS deficit, or about 17% slower.
In the end, it’s not really all that close, with the RX 7900 GRE outperforming the RTX 4070 across all but a few benchmarks. It only gets soundly beaten in Blender 4.0 3D rendering, a test where Nvidia has a natural advantage thanks to Blender being so heavily tied to Nvidia’s CUDA platform.
Other than that, though, the RX 7900 GRE scores nearly 27% better overall than the RTX 4070, which is incredible considering they are the same price. In Nvidia’s favor, though, the RX 7900 GRE needs a lot of power to pull this off, so if efficiency and sustainability are important to you, the RTX 4070 delivers a lot of performance from a much lower power budget.
Winner: AMD Radeon RX 7900 GRE
Which one should you buy?
Which GPU you should buy will ultimately come down to your needs.
If you primarily want a mid-range graphics card for gaming first and foremost, then you’re going to be better served by the AMD RX 7900 GRE. AMD’s card performs better and more consistently at 1080p, 1440p, and even 4K.
However, if gaming is more of a secondary activity and you want a graphics card for creativity and productivity, then there’s a case to be made for the RTX 4070. If you’re looking to get into 3D modeling on the (relative) cheap, Nvidia’s mid-range card can get you a lot farther than AMD’s, but that’s a fairly narrow use case, and most people are going to want the RX 7900 GRE. Considering there’s no difference in price between the two, it’s an easy call to make.
According to Israeli startup NeuReality, many AI possibilities aren’t fully realized due to the cost and complexity of building and scaling AI systems.
Current solutions are not optimized for inference and rely on general-purpose CPUs, which were not designed for AI. Moreover, CPU-centric architectures necessitate multiple hardware components, resulting in underutilized Deep Learning Accelerators (DLAs) due to CPU bottlenecks.
NeuReality’s answer to this problem is the NR1 AI Inference Solution, a combination of purpose-built software and a unique network-addressable inference server-on-a-chip. NeuReality says this will deliver improved performance and scalability at a lower cost, alongside reduced power consumption.
An express lane for large AI pipelines
“Our disruptive AI Inference technology is unbound by conventional CPUs, GPUs, and NICs,” said NeuReality’s CEO Moshe Tanach. “We didn’t try to just improve an already flawed system. Instead, we unpacked and redefined the ideal AI Inference system from top to bottom and end to end, to deliver breakthrough performance, cost savings, and energy efficiency.”
The key to NeuReality’s solution is a Network Addressable Processing Unit (NAPU), a new architecture design that leverages the power of DLAs. The NeuReality NR1, a network addressable inference Server-on-a-Chip, has an embedded Neural Network Engine and a NAPU.
This new architecture enables inference through hardware with AI-over-Fabric, an AI hypervisor, and AI-pipeline offload.
The company has two products that utilize its server-on-a-chip: the NR1-M AI Inference Module and the NR1-S AI Inference Appliance. The former is a full-height, double-wide PCIe card containing one NR1 NAPU system-on-a-chip and a network-addressable inference server that can connect to an external DLA. The latter is an AI-centric inference server containing NR1-M modules with the NR1 NAPU. NeuReality claims the server “lowers cost and power performance by up to 50X but doesn’t require IT to implement for end users.”
“Investing in more and more DLAs, GPUs, LPUs, TPUs… won’t address your core issue of system inefficiency,” said Tanach. “It’s akin to installing a faster engine in your car to navigate through traffic congestion and dead ends – it simply won’t get you to your destination any faster. NeuReality, on the other hand, provides an express lane for large AI pipelines, seamlessly routing tasks to purpose-built AI devices and swiftly delivering responses to your customers, while conserving both resources and capital.”
NeuReality recently secured $20 million in funding from the European Innovation Council (EIC) Fund, Varana Capital, Cleveland Avenue, XT Hi-Tech and OurCrowd.
In a move to cut its dependency on Nvidia‘s high-cost AI chips, Naver, the South Korean equivalent of Google, has signed a 1 trillion won ($750 million) agreement with Samsung.
The deal will see the tech giant supply its more affordable Mach-1 chips to Naver by the end of 2024.
The Mach-1 chip, currently under development, is an AI accelerator in the form of an SoC that combines Samsung’s proprietary processors and low-power DRAM chips to reduce the bottleneck between the GPU and HBM.
Just the start
The announcement of the Mach-1 was made during Samsung’s 55th regular shareholders’ meeting. Kye Hyun Kyung, CEO of Samsung Semiconductor, said the chip design had passed technological validation on FPGAs and that finalization of the SoC design was in progress.
The exact volume of Mach-1 chips to be supplied and prices are still under discussion, but The Korea Economic Daily reports that Samsung intends to price the Mach-1 AI chip at around $3,756 each. The order is expected to be for somewhere between 150,000 and 200,000 units.
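Those numbers line up with the size of the reported agreement. A rough sanity check, multiplying the reported unit price by the rumored order volume (both figures are still under discussion, so treat this as ballpark math):

```python
# Ballpark check: reported unit price times rumored order volume, set against
# the 1 trillion won (~$750 million) agreement reported above.
unit_price_usd = 3_756
order_low_units, order_high_units = 150_000, 200_000

low_total = unit_price_usd * order_low_units    # ~$563 million
high_total = unit_price_usd * order_high_units  # ~$751 million

print(f"${low_total / 1e6:.0f}M to ${high_total / 1e6:.0f}M")
# -> $563M to $751M, with the upper end matching the reported ~$750M deal
```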
Naver plans to use Samsung’s Mach-1 chips to power servers for its AI map service, Naver Place. According to The Korea Economic Daily, Naver will order further Mach-1 chips if the initial batch performs as well as hoped.
Samsung sees this deal with Naver as just the start. The tech giant is reportedly in supply talks with Microsoft and Meta Platforms, which, like Naver, are actively seeking to reduce their reliance on Nvidia’s AI hardware.
With the Naver deal, Samsung is also looking to better compete with its South Korean rival SK Hynix, which is the dominant player in the advanced HBM segment. Samsung has been heavily investing in HBM recently and at the start of March announced the industry’s first 12-stack HBM3E 12H DRAM. This reportedly outperforms Micron’s 24GB 8H HBM3E in terms of capacity and bandwidth and is expected to begin shipping in Q2 this year.
Samsung is reportedly planning to launch its own AI accelerator chip, the ‘Mach-1’, in a bid to challenge Nvidia‘s dominance in the AI semiconductor market.
The new chip, which will likely target edge applications with low power consumption requirements, will go into production by the end of this year and make its debut in early 2025, according to the Seoul Economic Daily.
The announcement was made during the company’s 55th regular shareholders’ meeting. Kye Hyun Kyung, CEO of Samsung Semiconductor, said the chip design had passed technological validation on FPGAs and that finalization of the SoC design was in progress.
Entirely new type of AGI semiconductor
The Mach-1 accelerator is designed to tackle AI inference tasks and will reportedly overcome the bottleneck issues that arise in existing AI accelerators when transferring data between the GPU and memory. This often results in slower data transmission speeds and reduced power efficiency.
The Mach-1 is reportedly a ‘lightweight’ AI chip, utilizing low-power (LP) memory instead of the costly HBM typically used in AI semiconductors.
The move is widely seen as Samsung’s attempt to regain its position as the world’s largest semiconductor company by fighting back against Nvidia, which completely dominates the AI chip market. Nvidia’s stock has soared in recent months, making it the third most valuable company in the world behind Microsoft and Apple.
While the South Korean tech behemoth currently has no plans to challenge Nvidia’s H100, B100, and B200 AI powerhouses, Seoul Economic Daily reports that Samsung has established an AGI computing lab in Silicon Valley to expedite the development of AI semiconductors. Kyung stated that the specialized lab is “working to create an entirely new type of semiconductor designed to meet the processing requirements of future AGI systems.”
Apple’s next-generation A18 Pro chip for iPhone 16 Pro models will feature a larger die size for boosted artificial intelligence performance, according to Jeff Pu, an investment analyst who covers companies within Apple’s supply chain.
In a research note with Hong Kong-based investment firm Haitong International Securities this week, Pu added that the A18 Pro chip will be equipped with a 6-core GPU, which would be equal to the A17 Pro chip in iPhone 15 Pro models.
Generative AI
iOS 18 is rumored to include new generative AI capabilities for a range of iPhone apps and system features, including Siri, Spotlight, Apple Music, Health, Messages, Numbers, Pages, Keynote, Shortcuts, and more. Apple has reportedly considered partnering with companies such as Google, OpenAI, and Baidu for at least some of these features.
iPhone 16 Pro models are rumored to feature an upgraded Neural Engine with “significantly” more cores, which could result in some of iOS 18’s generative AI features being exclusive to those models. Pu previously said the larger die size would be related to the Neural Engine, which could power on-device generative AI features.
Apple has used a 16-core Neural Engine since the iPhone 12 series. However, it has still improved the Neural Engine’s performance over the years, even when core counts have not changed. For example, Apple says the A17 Pro chip has up to a 2x faster Neural Engine compared to the one in the iPhone 14 Pro’s A16 Bionic chip.
Apple has promised to make generative AI announcements later this year. iOS 18 will be previewed at Apple’s developers conference WWDC in June, so we’re just a few months away from learning about the company’s plans.
6-Core GPU
With the A17 Pro chip, iPhone 15 Pro models have significantly improved graphics capabilities compared to previous models. Apple said the new 6-core GPU is up to 20% faster and also more power efficient than the 5-core GPU in the A16 Bionic chip. iPhone 15 Pro models also support hardware-accelerated ray tracing and mesh shading for improved graphics rendering, which results in more realistic graphics in games.
Pu believes Apple will stick with a 6-core GPU for the A18 Pro chip, so graphics improvements may be more limited for iPhone 16 Pro models this year.
Wrap Up
Apple is expected to announce the iPhone 16 series in September. For more details, read our iPhone 16 and iPhone 16 Pro roundups.