Two years after the debut of its Arc Alchemist GPUs, Intel is launching six new Arc products, but these are designed for edge/embedded systems.
These edge systems, which process data near the source to reduce latency and bandwidth use, are becoming increasingly essential in areas such as IoT, autonomous vehicles, and AI applications.
As Intel says, “AI at the edge is exploding with new use cases and workloads being developed daily. These AI workloads often require a high degree of parallel processing and memory bandwidth for peak performance, dedicated hardware, optimized architecture for compute efficiency, and reduced latency with faster results for real-time processing. A discrete GPU may be the ideal solution for edge AI use cases requiring high performance and complex model support.”
Six SKUs
The new Arc GPUs for the edge are built on Intel’s highly scalable Xe-core architecture and support AI acceleration, visual computing and media processing. Using the OpenVINO toolkit, developers can deploy AI models across Intel hardware.
The edge-focused Arc offerings bring a number of benefits, including reduced latency, improved bandwidth efficiency, and better privacy and security.
For high performance and to handle heavy AI workloads and expansive use cases such as facial recognition and generative conversational speech, there’s the 7XXE. For immersive visual experiences and enhanced AI inferencing capabilities, there’s the 5XXE, and for low power and small form factor requirements, Intel has the 3XXE.
There are six SKUs available – the A310E and A350E with 6 Xe-cores, the A370E and A380E with 8 cores, and the A580E and A750E with 28 cores. The A310E, A350E and A370E have 4GB of GDDR6 memory with 112GB/s memory bandwidth. The A380E has 6GB with 186GB/s bandwidth, while the memory and memory bandwidth of the A580E and A750E are unknown for now; Intel says only that they are “in planning”. There’s no launch date for those two either, just TBD. The other four SKUs will be available this month.
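The quoted bandwidth figures follow from GDDR6’s usual arithmetic: per-pin data rate times bus width, divided by 8 bits per byte. Here’s a quick sketch in Python; the bus widths and data rates are illustrative assumptions chosen to reproduce the quoted numbers, not configurations Intel has confirmed for these SKUs:

```python
def gddr6_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbit/s)
    times bus width, divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# Illustrative (assumed) configurations:
# a 64-bit bus at 14 Gbit/s per pin reproduces the quoted 112 GB/s,
# and a 96-bit bus at 15.5 Gbit/s reproduces the 186 GB/s figure.
print(gddr6_bandwidth_gbps(14.0, 64))   # 112.0
print(gddr6_bandwidth_gbps(15.5, 96))   # 186.0
```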
Intel Arc GPUs are built to be paired with Intel Core processors, from 10th Gen upwards, and Intel Xeon W-3400 and W-2400. A number of products featuring the new Arc GPUs are set to be released in the coming months from Intel partners including ADLINK, Advantech, Asus, Matrox and Sparkle.
MSI has long been an under-the-radar producer of PCs and laptops, with as many hits as misses in its repertoire. As we enter the “AI laptop” age, MSI’s first volley in the new category lands squarely on the hit side, with its Prestige 13 AI Evo nailing an effective balance among price, performance, and portability.
As the name suggests, the Prestige 13 is an ultraportable 13.3-inch laptop, featuring a 2,880 x 1,800-pixel OLED display (no touchscreen). Inside is an entry-level Intel Core Ultra 5 125H CPU with 16 GB of RAM and a 512-GB SSD. Nothing fancy, but enough to get the job done. There’s also a version with the Core Ultra 7 with double the RAM and storage for not much more.
For those of you who haven’t been following the microchip world closely, Intel’s Core Ultra series features (among other innovations) a new neural processing unit designed specifically to improve artificial intelligence operations. The “Evo” designation is bestowed on devices by Intel for laptop designs that “pass rigorous testing around performance, battery life, connectivity, audio and visual quality, size, weight, and more.”
Photograph: MSI
With that preface, I’ll start where the laptop soars the highest: performance. The Prestige indeed lives up to its name on general apps and AI-related tests. MSI’s ultralight Windows machine ran rings around the performance of the more tricked-out Lenovo X1 Carbon, which features a faster Core Ultra processor. The MSI bested it on general app benchmarks by 3 to 47 percent, depending on the test, and the difference was noticeable in daily use, as the Prestige felt whip-crack fast to load apps, recalculate spreadsheets, and the like. The picture wasn’t as rosy in its graphics capabilities, as the lower-end CPU and lack of memory suppressed frame rates on video tasks considerably—although the Prestige did perform surprisingly well on photo rendering tests.
At 2.1 pounds and 18 mm thick, this laptop is about as portable as it gets in the 13.3-inch category, though more diminutive 13.0-inch units can be a few ounces lighter. Available in white or black, the magnesium-aluminum alloy chassis isn’t the sturdiest I’ve felt lately, but at the same time, it doesn’t come across as flimsy.
Get an AI-powered photo editor, filters and more for just $159.99. Image: StackSocial
Getting just the right shot is exciting, but any professional photographer will tell you that post-production work separates decent vacation snapshots from magazine-worthy images. That requires powerful photo editing software. And photo apps don’t come much more highly recommended than Luminar Neo, which lets you utilize AI to make your images look amazing.
Now there’s a great opportunity for new users to get this versatile software (for Mac or PC), along with hundreds of dollars worth of add-ons and training. You can get the Award-Winning Luminar Neo Lifetime Bundle for just $159.99 with code GET20. That’s a tremendous discount off the retail price of $752.
Pick your lighting, paint the sky … and more
The iPhone’s “computational photography mad science” usually captures great images. However, even the best photos typically require some tweaking if you really want them to shine. And you can make all the photo edits you want with lifetime access to the latest version of Skylum’s Luminar Neo. This impressive photo editing software won a Red Dot Award for interface design in 2022. And it’s a favorite on many tech and photography websites. The software lets users perform easy AI enhancements on photos, removing or changing backgrounds and touching up skin tones.
Newer tools include focus stacking, light source manipulation and more, and you’ll gain access to the latest updates as they arrive.
Plus, you get pro photo filters (and videos to teach you how to use everything)
The bundle also includes six professional-grade add-ons to Luminar Neo. They greatly expand the base library of overlays, filters and high-resolution backgrounds, ready to make any photo pop.
Don’t know how to make pro photo edits? You’re in luck. This all-inclusive bundle also comes with courses that will show you how to make the most of the software and extras. Novices and experienced visual artists alike will get a ton of use out of a 10-video tutorial in advanced editing techniques by pro photographer Albert Dros.
Save on the Award-Winning Luminar Neo Lifetime Bundle for Mac or PC
These add-ons cost anywhere from $19 to $269 individually. But with this bundle, you get them all — plus Luminar Neo itself — for a one-time payment of just $159.99. Just enter code GET20 at checkout to secure your savings.
Prices subject to change. All sales handled by StackSocial, our partner who runs Cult of Mac Deals. For customer support, please email StackSocial directly. We originally published this post on January 22, 2024. We updated the pricing info.
Horizon Forbidden West has come to PC, and it’s given me another reason not to buy a PS5. I’ve bought every generation of PlayStation console since the OG model, but with Sony’s shift to (belatedly) porting most of its exclusives to PC, it just doesn’t seem worth splashing out on a new console when I can just wait for the games I want to play to come to me.
So, I was very happy to hear that Horizon Forbidden West was going to be ported to PC. As a big fan of the original game, which I played on PS4, I’d been looking forward to playing it.
Of course, as a visually-impressive first-party game from Sony, I was also keen to see how it performed on our 8K rig. As you can see in the specs box on the right, our rig has remained largely unchanged for over a year. This is because it remains a formidable machine – and, crucially, the Nvidia GeForce RTX 4090 graphics card that does the bulk of the work when gaming has yet to be beaten. It remains the best graphics card money can buy.
With rumors swirling that Sony is planning on releasing a more powerful PS5 Pro console in the near future that could target 8K resolutions through a mix of more powerful hardware and upscaling technology, Horizon Forbidden West at 8K on PC may give us an idea of the kind of visuals future PlayStation games may offer.
It also suggests what obstacles Sony will face if the PS5 Pro does indeed target 8K resolutions. Despite being almost two years old, the RTX 4090 GPU still costs more than its original launch price, hovering around $2,000/£2,000. While the PS5 Pro will likely be more expensive than the standard PS5, there’s no way it’ll be even half the price of Nvidia’s GPU – and that’s before you add in the cost of the other PC components required. Basically, you can’t currently buy an affordable 8K gaming machine that is priced for mainstream success. That’s the scale of the challenge Sony faces.
(Image credit: Future)
Spoilt for choice
One of the best things about Sony’s initiative to bring its games to PC, apart from giving me an excuse not to spend money I don’t have on a PS5, is that they usually come with an excellent choice of PC-centric options, including support for upscaling technology from Nvidia and support for ultrawide monitors.
Horizon Forbidden West continues this streak, and the PC port has been handled by Nixxes Software, which has handled many previous PlayStation to PC ports.
This latest release is particularly noteworthy as not only does it support DLSS 3 for Nvidia RTX graphics, but it also supports competing upscaling tech in the form of AMD FSR 2.2 and Intel XeSS.
All three of these features allow the game to run at a lower resolution, with the images upscaled so that the game appears at a higher resolution, but without the additional strain on your PC’s graphics card.
This mainly allows less powerful GPUs to hit resolutions with graphical effects enabled that they usually wouldn’t be able to handle. It also allows the mighty RTX 4090 to reach the demanding 8K resolution (7680 × 4320) in certain games while maintaining a playable framerate.
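The savings come from the pixel math: rendering at a fraction of the target resolution cuts the per-frame workload dramatically. Here’s a rough sketch; the per-axis scale factors are assumptions based on DLSS’s commonly documented modes, not values taken from this game:

```python
# Rough pixel math for upscaling to 8K (7680 x 4320).
# Per-axis scale factors are assumptions based on DLSS's published
# Quality / Performance / Ultra Performance modes.
TARGET = (7680, 4320)
MODES = {"Native": 1.0, "Quality": 2 / 3, "Performance": 1 / 2, "Ultra Performance": 1 / 3}

for mode, scale in MODES.items():
    w, h = round(TARGET[0] * scale), round(TARGET[1] * scale)
    print(f"{mode:17s} renders {w} x {h} ({w * h / 1e6:.1f} million pixels)")
```

Native 8K works out to roughly 33 million pixels per frame, while an Ultra Performance-style one-third scale renders under 4 million, which is why the framerate gains can be so large.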
By supporting the three major upscaling tools, Horizon Forbidden West gives users much more choice (both FSR and XeSS work for a range of GPUs, while DLSS is exclusive to recent Nvidia GPUs) – and it also gives me a chance to see which upscaling tech performs the best.
(Image credit: Sony)
First up: DLSS
First, I played Horizon Forbidden West at the 8K resolution of 7680 × 4320 with the graphics preset at ‘Very High’ – the highest quality on offer. With DLSS turned off (so the game was running at native 8K), my 8K test rig managed an average of 32 frames per second (fps).
Considering that this is a graphically intensive game running at the highest graphics preset and at a resolution that pushes around 33 million pixels, this is very impressive, and is a testament to the raw power of the RTX 4090, the rest of the components inside the rig built by Stormforce Gaming, and the talents of Guerrilla Games (developers of the game) and Nixxes Software.
I feel that 30fps is the minimum frame rate for a playable game, so if you wanted to play Horizon Forbidden West at a native 8K resolution, that’s certainly possible. If you drop the graphics preset, then the frame rate will go up – though at the cost of graphical fidelity.
Of course, you don’t spend around $2,000 on a GPU to get 32fps in a game, so I turned on DLSS and set it to ‘Quality’, which minimizes the amount of upscaling performed to preserve image quality as much as possible. This led the average framerate to jump to 45fps, with a maximum of 60.7fps.
One thing to note with my results, which you can view in the chart above, is that because Horizon Forbidden West doesn’t have a built-in benchmark tool, I had to play the same section over and over again, using MSI Afterburner to record my framerate. I chose a section of the game with large open spaces, water effects and a combat encounter, and I tried to make each playthrough, lasting around eight minutes, as similar as possible. However, my playthroughs weren’t identical, as some things, such as enemy attacks, would change, and this explains why there are some discrepancies between results. Still, it should give you a good idea of the difference each setting makes.
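The manual methodology described above boils down to averaging per-run framerates and accepting some run-to-run noise. A minimal sketch, using hypothetical numbers rather than the article’s actual logs:

```python
from statistics import mean, stdev

# Hypothetical per-run average fps from repeated ~8-minute playthroughs
# of the same section (illustrative numbers, not the actual test logs).
runs = [31.4, 32.6, 31.9, 32.1]

avg = mean(runs)
spread = stdev(runs)
print(f"average {avg:.1f} fps, run-to-run spread of about {spread:.1f} fps")
```

Reporting the spread alongside the average is a simple way to show how much of a difference between two settings is real rather than run-to-run variation.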
Next, I turned ‘Frame Generation’ on. This is a new feature exclusive to DLSS 3 and Nvidia’s RTX 4000 series of cards. It uses AI to generate and insert frames between normal frames rendered by the GPU. The goal is to make games feel even smoother with higher, more consistent framerates while maintaining image quality.
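Conceptually, frame generation interleaves one synthesized frame between each pair of rendered frames, roughly doubling the output framerate. The toy sketch below represents frames as single numbers and just averages neighbors to show the interleaving; the real technique uses optical flow and neural networks, not a simple blend:

```python
# Toy illustration of frame generation: one synthesized frame is
# inserted between each pair of rendered frames. Frames are modeled
# as single numbers; DLSS 3's actual method is far more sophisticated.
def interpolate(a: float, b: float) -> float:
    return (a + b) / 2  # stand-in for the AI-generated intermediate frame

def with_frame_generation(frames: list[float]) -> list[float]:
    out = []
    for a, b in zip(frames, frames[1:]):
        out += [a, interpolate(a, b)]
    out.append(frames[-1])
    return out

rendered = [0.0, 1.0, 2.0]              # frames the GPU actually rendered
print(with_frame_generation(rendered))  # [0.0, 0.5, 1.0, 1.5, 2.0]
```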
As the chart shows, this gave the game another bump in frames per second. I then tested the other DLSS settings with Frame Generation left on.
With DLSS set to Ultra Performance, I hit 59.3fps at 8K – basically the 60fps goal I aim for in these tests, which offers a balance of image quality and performance. With Ultra Performance, the RTX 4090 is rendering the game at a much lower resolution, then using DLSS to upscale to 8K, and this reliance on upscaling can lead to an image quality that can suffer from a lack of sharpness and detail, and graphical artifacts. The good news is that DLSS 3 is a big improvement over previous versions, and the hit to graphic quality is far less noticeable these days.
So, thanks to DLSS, you can indeed play Horizon Forbidden West at 8K. But how do AMD and Intel’s rival technologies cope?
(Image credit: Sony Interactive Entertainment)
AMD FSR 2.2 tested
AMD’s FSR 2.2 technology isn’t as mature as Nvidia’s DLSS 3, but it has a noteworthy feature that DLSS lacks: it’s open source and doesn’t just work with AMD graphics cards – Nvidia and Intel GPUs can make use of it as well.
This makes it far more accessible than DLSS, which is exclusive to new and expensive Nvidia GPUs, and for many people this flexibility makes up for any shortfall in performance.
As you can see from my results above, FSR 2.2 provides a decent jump in frame rates compared to running Horizon Forbidden West natively at 8K, though at each quality setting, it doesn’t quite keep up with DLSS 3’s results.
The best result I managed was with FSR set to ‘Ultra Performance’, where it hit 55.2fps on average. That’s below DLSS 3’s best results, but certainly not bad, and close to double the performance of running the game natively.
As well as being unable to hit the same highs as DLSS 3, AMD FSR 2.2’s image quality at Ultra Performance isn’t quite as good as DLSS 3 at similar settings, with a few instances of shimmering and ghosting becoming noticeable during my playthrough.
(Image credit: Sony)
Intel XeSS results
Finally, I tested out Intel’s XeSS technology. While there is a version of XeSS designed to run with Intel Arc graphics cards, as with FSR you can use XeSS with various GPU brands, so there is yet another upscaling tool that gamers can try out. As with most things, the more choice there is for consumers, the better.
XeSS hasn’t been around for as long as DLSS or FSR, and as you can see from the results above, it wasn’t able to match either Nvidia’s or AMD’s solution. There’s no ‘Ultra Performance’ mode either, so XeSS hits its highest framerates in ‘Performance’ mode, with an average of 50.6fps. This leads to a perfectly playable experience at 8K, but it’s noticeably more sluggish than when playing with DLSS at Ultra Performance.
However, it still gives you a decent fps bump over native 8K, and with Intel being one of the biggest proponents of artificial intelligence, I’m pretty confident that XeSS performance will improve as the technology matures. The fact that you can use it with GPUs from Intel’s rivals is also a big plus.
(Image credit: Sony)
Conclusion: DLSS for the win (again)
Once again, DLSS 3 has proved to be the best way of getting a game to run at 8K and 60fps with minimal compromises.
Not only did it allow the RTX 4090 to hit 59.3fps on average while playing Horizon Forbidden West, but it also looked the best with minimal impact to image quality.
This may not come as too much of a surprise – DLSS has been around for quite a while now, and Nvidia has been putting a lot of work into improving the technology with each release.
Also, while Nvidia’s preference for proprietary tech means you need the latest RTX 4000 series of GPUs to get the most out of it, this does at least mean Team Green can make use of exclusive features of its GPUs such as Tensor Cores. With AMD and Intel’s more open implementations, they are unable to target specific hardware as easily – though FSR and XeSS are available to a much wider range of PC gamers.
And, while FSR doesn’t quite match DLSS performance with Horizon Forbidden West, it comes close, and if you don’t have an Nvidia GPU, this is a fine alternative. As for XeSS, it shows plenty of promise.
So, upscaling tech has made gaming at 8K on PC achievable, and it’s great to see increased choice for users. If Sony is indeed working on a PS5 Pro that aims to run games like Horizon Forbidden West at 8K, it’s going to have to come up with its own upscaling tech (or adapt FSR or XeSS) if it wants to compete.
This past Monday, about a dozen engineers and executives at data science and AI company Databricks gathered in conference rooms connected via Zoom to learn if they had succeeded in building a top artificial intelligence language model. The team had spent months, and about $10 million, training DBRX, a large language model similar in design to the one behind OpenAI’s ChatGPT. But they wouldn’t know how powerful their creation was until results came back from the final tests of its abilities.
“We’ve surpassed everything,” Jonathan Frankle, chief neural network architect at Databricks and leader of the team that built DBRX, eventually told the team, which responded with whoops, cheers, and applause emojis. Frankle usually steers clear of caffeine but was taking sips of iced latte after pulling an all-nighter to write up the results.
Databricks will release DBRX under an open source license, allowing others to build on top of its work. Frankle shared data showing that across about a dozen or so benchmarks measuring the AI model’s ability to answer general knowledge questions, perform reading comprehension, solve vexing logical puzzles, and generate high-quality code, DBRX was better than every other open source model available.
AI decision makers: Jonathan Frankle, Naveen Rao, Ali Ghodsi, and Hanlin Tang.Photograph: Gabriela Hasbun
It outshined Meta’s Llama 2 and Mistral’s Mixtral, two of the most popular open source AI models available today. “Yes!” shouted Ali Ghodsi, CEO of Databricks, when the scores appeared. “Wait, did we beat Elon’s thing?” Frankle replied that they had indeed surpassed the Grok AI model recently open-sourced by Musk’s xAI, adding, “I will consider it a success if we get a mean tweet from him.”
To the team’s surprise, on several scores DBRX was also shockingly close to GPT-4, OpenAI’s closed model that powers ChatGPT and is widely considered the pinnacle of machine intelligence. “We’ve set a new state of the art for open source LLMs,” Frankle said with a super-sized grin.
Building Blocks
By open-sourcing DBRX, Databricks is adding further momentum to a movement that is challenging the secretive approach of the most prominent companies in the current generative AI boom. OpenAI and Google keep the code for their GPT-4 and Gemini large language models closely held, but some rivals, notably Meta, have released their models for others to use, arguing that it will spur innovation by putting the technology in the hands of more researchers, entrepreneurs, startups, and established businesses.
Databricks says it also wants to open up about the work involved in creating its open source model, something that Meta has not done for some key details about the creation of its Llama 2 model. The company will release a blog post detailing the work involved to create the model, and also invited WIRED to spend time with Databricks engineers as they made key decisions during the final stages of the multimillion-dollar process of training DBRX. That provided a glimpse of how complex and challenging it is to build a leading AI model—but also how recent innovations in the field promise to bring down costs. That, combined with the availability of open source models like DBRX, suggests that AI development isn’t about to slow down any time soon.
Ali Farhadi, CEO of the Allen Institute for AI, says greater transparency around the building and training of AI models is badly needed. The field has become increasingly secretive in recent years as companies have sought an edge over competitors. Transparency is especially important when there is concern about the risks that advanced AI models could pose, he says. “I’m very happy to see any effort in openness,” Farhadi says. “I do believe a significant portion of the market will move towards open models. We need more of this.”
A man with a balloon for a head is somehow not the weirdest thing you’ll see today thanks to a series of experimental video clips made by seven artists using OpenAI’s Sora generative video creation platform.
Unlike OpenAI‘s ChatGPT AI chatbot and the DALL-E image generation platform, the company’s text-to-video tool still isn’t publicly available. However, on Monday, OpenAI revealed it had given Sora access to “visual artists, designers, creative directors, and filmmakers” and revealed their efforts in a “first impressions” blog post.
While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI’s Artist In Residence Alex Reben’s 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller’s video depicts models melded with what looks like translucent stained glass.
Not all the videos are so esoteric.
OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI sora / Don Allen Stevenson III)
If we had to give out an award for most entertaining, it might be multimedia production company shy kids’ “Air Head”. It’s an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.
Sora’s ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids’ Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it’s a funny and extremely surreal little movie.
But wait, it gets stranger.
The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III’s “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.
OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man’s balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?
That OpenAI has invited creatives to take Sora for a test run is not surprising. It’s their livelihoods in art, film, and animation that are most at risk from Sora’s already impressive capabilities. Most seem convinced it’s a tool that can help them more quickly develop finished commercial products.
“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It’s enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.
Go watch the clips but don’t blame us if you wake up in the middle of the night screaming.
Dyson’s latest high-end robot vacuum, the Dyson 360 Vis Nav, has reached the shores of North America after spending more than eight months lingering in the UK and Australia.
You can think of it as the successor to the old Dyson 360 Eye model from all the way back in 2017. At the center of the Vis Nav is a camera with a fisheye lens capable of seeing in 360 degrees, giving the machine a full panoramic view of your home. All of the visual information it takes in is then processed through its on-board system, creating a map. It’ll know the layout of your house, including where the furniture is located, and even make note of areas where dust frequently accumulates.
Of course, you do have the option to make adjustments on the fly via the MyDyson app. With a mobile device, owners can create cleaning schedules for the robot vacuum and even instruct it to avoid certain areas of their home. The Vis Nav is compatible with both Google Home and Alexa, allowing you to control it with simple voice commands.
Notable features
One of the things the company boasts about for this robo-vacuum is its power. The model houses a motor capable of spinning at 11,000rpm, delivering a suction of 65 air watts (AW). If you check out our review of the Vis Nav from last year, you’ll learn it was able to completely fill up its trash bin while in the default Auto mode.
There are four separate modes in total, all of which can be activated through the aforementioned mobile app. Auto, as the name suggests, is your standard “set it and forget it” setting. The device will clean your floors without direct input. On the side is an extending nozzle that can reach those tough spots up against the wall. As it’s sweeping, the Vis Nav can dynamically adjust its suction power to the side duct so it can clean deep into those missed spots.
(Image credit: Dyson)
Next is Boost mode, which greatly increases the vacuum’s performance when scrubbing your house. It knows when to magnify power thanks to an internal “piezo sensor”, a piece of hardware that tells the model how much dust is currently on the floor. Third is Quiet mode, which is supposed to keep the Vis Nav quiet, though if you read our review, it’s really not that quiet. The last one is Quick mode, which, according to The Verge, has the vacuum clean open areas while skipping the dirt stuck inside corners.
Availability
As much as we harp on the review, it’s worth going back to because it seems Dyson has made very few changes. The Vis Nav here in the Western Hemisphere has all the same positives as well as all the same negatives.
It appears Dyson improved the battery on the North American release as it can now last up to 65 minutes on a single charge instead of 50 minutes. Recharging will take 1.55 hours instead of 2.5 hours. The one thing that hasn’t changed is how expensive it is.
The 360 Vis Nav is on sale right now in the United States and Canada for $1,200 USD/$1,500 CAD. So yeah, he’s a pricey fella. Fortunately, there are other options out there.
If you’re shopping for a premium gaming handheld with specs that beat out both the Nintendo Switch and Steam Deck, then you should check out this excellent Best Buy discount on the ultra-powerful Asus ROG Ally Z1 Extreme.
Right now, you can purchase the Asus ROG Ally Z1 Extreme for $100 less than its retail price at Best Buy. The US retailer currently has it listed at only $599.99 (was $699.99). It’s not the first time this model has been discounted; we saw the same price drop during last year’s Black Friday sales event. If you’re looking to spend even less, the standard Asus ROG Ally Z1 is also discounted right now, down to just $399.99 (was $599.99), which is a $200 saving.
The Asus ROG Ally Z1 Extreme is essentially an improved version of the already powerful handheld gaming device. It can display resolutions of up to 1080p and even has support for 120Hz refresh rates. The Z1 Extreme’s beefier CPU offers much-improved performance, visual fidelity and load times. So know that you’re getting a superior experience with the pricier model.
NVIDIA’s H100 chips are used by nearly every AI company in the world to train large language models hooked into services like ChatGPT. It’s been great for business. Now, the company is ready to make those chips look terrible, announcing a next-generation platform called Blackwell.
Named for David Harold Blackwell, a mathematician who specialized in game theory and statistics, NVIDIA claims Blackwell is the world’s most powerful chip, reaching speeds of 20 petaflops compared to the 4 petaflops the H100 provided. Yeah, throw it in the trash. You need new chips.
And if you didn’t know how powerful NVIDIA is, its press release for this new platform includes quotes from the CEOs of OpenAI, Microsoft, Alphabet, Meta and Tesla — yes, all CEOs you probably know the names of.
— Mat Smith
The biggest stories you might have missed
The tournament is postponed until further notice.
Respawn
Yeah, this is bad. Respawn, the EA-owned studio behind Apex Legends, has postponed the North American Finals tournament after hackers broke into matches and equipped players with cheats. Footage of the hacks on Twitch shows players being able to see their opponents’ locations through walls, while notable player (and one of the best) ImperialHal was gifted an aimbot to hit enemies more easily. Respawn said it would share more information soon, but as of the time of writing, the studio hasn’t elaborated.
The Mevo Core has improved built-in mics and works with any MFT lens.
Logitech is expanding its Mevo lineup of livestreaming cameras. The company’s new Mevo Core shoots in 4K, a big upgrade from the 1080p Mevo Start camera kit I tested a few years back. However, the trade-off is pricing: at $999, the new model will set you back three times as much for a three-camera setup. So yes, this is probably for the pro streamers.
To emphasize that, the Core ships as a body only, but Logitech will sell lens bundle kits through Amazon and B&H Photo Video. You will need to buy an additional lens just to make it work. And it’s only compatible with Micro Four Thirds (MFT) lenses, so there’s a high chance you’ll have to buy one.
It’s like Google search on Safari all over again. Plus 15 years.
Apple is reportedly in talks with Google to integrate its Gemini AI in iPhones, according to Bloomberg. Gemini could be the cloud-based generative AI engine for Siri and other iPhone apps, while Apple’s models could be woven into the upcoming iOS 18 for on-device AI tasks.
There are regulatory concerns to consider—the Department of Justice has already sued Google over its search dominance, including the way it pays Apple and other companies to use its search engine. But given how Microsoft and OpenAI’s partnership turned the Bing search engine into something people were actually talking about, the team-up might be worth the risk.
The next generation of AI will be powered by Nvidia hardware, the company has declared after it revealed its next generation of GPUs.
Company CEO Jensen Huang took the wraps off the new Blackwell chips at Nvidia GTC 2024 today, promising a major step forward in terms of AI power and efficiency.
The first Blackwell “superchip”, the GB200, is set to ship later this year, with the ability to scale up from a single rack all the way to an entire data center, as Nvidia looks to push on with its leadership in the AI race.
Nvidia Blackwell
Representing a significant step forward for the company’s hardware from its predecessor, Hopper, Huang noted that Blackwell contains 208 billion transistors (up from 80 billion in Hopper) across its two GPU dies, which are connected by a 10 TB/s chip-to-chip link into a single, unified GPU.
This makes Blackwell up to 30x faster than Hopper when it comes to AI inference tasks, offering up to 20 petaflops of FP4 power, far ahead of anything else on the market today.
(Image credit: Future / Mike Moore)
During his keynote, Huang highlighted not only the huge jump in power between Blackwell and Hopper – but also the major difference in size.
“Blackwell’s not a chip, it’s the name of a platform,” Huang said. “Hopper is fantastic, but we need bigger GPUs.”
Despite this, Nvidia says Blackwell can reduce cost and energy consumption by up to 25x, giving the example of training a 1.8 trillion parameter model: a job that would previously have taken 8,000 Hopper GPUs and 15 megawatts of power can now be done by just 2,000 Blackwell GPUs consuming just four megawatts.
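Those quoted figures are easy to sanity-check: the GPU count falls 4x and the total power draw 3.75x, even though each Blackwell GPU draws marginally more than a Hopper (the claimed 25x cost and energy reduction presumably also factors in faster training times). A quick check:

```python
# Sanity-checking Nvidia's quoted figures for training a
# 1.8-trillion-parameter model on Hopper vs Blackwell.
hopper_gpus, hopper_megawatts = 8000, 15
blackwell_gpus, blackwell_megawatts = 2000, 4

print(f"GPU count reduced {hopper_gpus / blackwell_gpus:.0f}x")              # 4x
print(f"Total power reduced {hopper_megawatts / blackwell_megawatts:.2f}x")  # 3.75x
print(f"Per-GPU draw: {hopper_megawatts * 1e6 / hopper_gpus:.0f} W (Hopper) vs "
      f"{blackwell_megawatts * 1e6 / blackwell_gpus:.0f} W (Blackwell)")
```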
The new GB200 brings together two Nvidia B200 Tensor Core GPUs and a Grace CPU to create what the company simply calls “a massive superchip” able to drive forward AI development, providing 7x the performance and four times the training speed of an H100-powered system.
The company also revealed a next-gen NVLink network switch chip with 50 billion transistors, which will mean 576 GPUs are able to talk to each other, creating 1.8 terabytes per second of bidirectional bandwidth.
Nvidia has already signed up a host of major partners to build Blackwell-powered systems, with AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure already on board alongside a host of big industry names.