Qualcomm has announced two new audio chips for wireless earphones and headphones: S3 Gen 3 and S5 Gen 3. Both these audio chips claim to make more affordable wireless earphones sound better through advanced connectivity features and AI-powered audio technologies.
Qualcomm S3 Gen 3 and S5 Gen 3 audio chips bring Bluetooth 5.4 LE and Auracast support
Qualcomm’s S3 Gen 3 and S5 Gen 3 chips bring audiophile-grade audio quality, high-quality audio codecs, more effective Active Noise Cancellation (ANC), improved call quality, and longer battery life to wireless earbuds and headphones. These audio chips sit below Qualcomm’s flagship audio chips: S7 and S7 Pro.
Qualcomm S3 Gen 3
The S3 Gen 3 is an audio chip for mid-tier earbuds, mid-tier headphones, and wireless speakers. It has all the essential features, including support for ANC, aptX Adaptive, aptX Lossless (24-bit 48kHz), Bluetooth 5.4 with LE Audio, Bluetooth Auracast, and Spatial Audio. It also features an improved digital-to-analog converter (DAC) for better audio quality and lower noise (the hissing audible at low volumes).
It supports Alexa and Google Assistant with wake-word activation, as well as Google Fast Pair for quicker pairing. And thanks to support for up to three microphones on each earbud, voice calls should be clearer.
Qualcomm S5 Gen 3
The S5 Gen 3 has more features than the S3 Gen 3. It is for premium earbuds/headphones (one step below flagship) and wireless speakers. Qualcomm says this new chip has 50% more memory and significantly more DSP processing power than the S5 Gen 2, bringing improved audio quality while gaming and listening to music. It also features improved ANC and echo cancellation.
This chip supports Bluetooth 5.4 with LE Audio, Bluetooth Auracast, and Spatial Audio. It features aptX Adaptive and aptX Lossless (24-bit 48kHz) audio codecs for higher audio quality. Adaptive ANC and Adaptive Transparency features are also supported.
The S3 Gen 3 and S5 Gen 3 audio chips will soon be available to earphone makers, and we can expect earbuds featuring these chips to hit the market later this year or early next year. Samsung uses Qualcomm’s chips in some of its wireless earbuds, but there is no confirmation if it will use the S3 Gen 3 or S5 Gen 3 in its future Galaxy Buds.
MediaTek has announced that it has partnered with Nvidia to make four new automotive chips for connected and self-driving cars. The company’s Dimensity Auto Cockpit series chips will compete with automotive chips from Qualcomm and Samsung.
MediaTek Dimensity Auto Cockpit chips to compete with Samsung’s Exynos Auto chips
The company’s new Dimensity Auto Cockpit series has four chips: CM-1, CV-1, CX-1, and CY-1. These 3nm chips use Arm’s Armv9-A CPU cores and Nvidia’s RTX GPU. They support Nvidia’s DRIVE OS and will power the infotainment system and other intelligence inside connected cars. Nvidia’s technology powers AI processing on these chips, and it will be used for entertainment, navigation, and general information needs.
The GPU can drive four high-resolution screens. In fact, MediaTek and Nvidia claim that the new chips can even support in-vehicle gaming, complete with ray-traced graphics. Media streaming and audio/video calls with AI-enhanced clarity are also supported. They can even monitor people inside the car for natural controls and gaze-aware UI.
The CV-1 is a chip for entry-level cars, the CM-1 is for mid-range cars, the CY-1 is for high-end cars, and the CX-1 is for premium offerings. These chips can simultaneously run multiple operating systems, including Android Auto, Linux, and QNX, in virtualization.
The new chips also support up to ten cameras and other sensors for assisted driving. The built-in ISP can process HDR scenes in real-time. Their connectivity features include a built-in 5G modem (sub-6GHz), NTN (for direct satellite connectivity for emergency calls and messages), Wi-Fi 7, GNSS, and Bluetooth.
NVIDIA’s H100 chips are used by nearly every AI company in the world to train large language models hooked into services like ChatGPT. It’s been great for business. Now, the company is ready to make those chips look terrible, announcing a next-generation platform called Blackwell.
Named for David Harold Blackwell, a mathematician who specialized in game theory and statistics, NVIDIA claims Blackwell is the world’s most powerful chip, reaching speeds of 20 petaflops compared to just 4 petaflops from the H100. Yeah, throw it in the trash. You need new chips.
And if you didn’t know how powerful NVIDIA is, its press release for this new platform includes quotes from the CEOs of OpenAI, Microsoft, Alphabet, Meta and Tesla — yes, all CEOs you probably know the names of.
— Mat Smith
The biggest stories you might have missed
You can get these reports delivered daily direct to your inbox. Subscribe right here!
The tournament is postponed until further notice.
Yeah, this is bad. Respawn, the EA-owned studio behind Apex Legends, has postponed the North American Finals tournament after hackers broke into matches and equipped players with cheats. Footage of the hacks on Twitch shows players being able to see their opponents’ locations through walls, while notable player (and one of the best) ImperialHal was gifted an aimbot to hit enemies more easily. Respawn said it would share more information soon, but as of this writing, the studio hasn’t elaborated.
The Mevo Core has improved built-in mics and works with any MFT lens.
Logitech is expanding its Mevo lineup of livestreaming cameras. The company’s new Mevo Core shoots in 4K, a big upgrade from the 1080p Mevo Start camera kit I tested a few years back. However, the trade-off is price: at $999 per camera, a three-camera setup will set you back three times as much as before. So yes, this is probably for the pro streamers.
To emphasize that, the Core ships as a body only, though Logitech will sell lens bundle kits through Amazon and B&H Photo Video. It’s only compatible with Micro Four Thirds lenses, so there’s a high chance you’ll have to buy one just to make it work.
It’s like Google search on Safari all over again. Plus 15 years.
Apple is reportedly in talks with Google to integrate its Gemini AI in iPhones, according to Bloomberg. Gemini could be the cloud-based generative AI engine for Siri and other iPhone apps, while Apple’s models could be woven into the upcoming iOS 18 for on-device AI tasks.
There are regulatory concerns to consider—the Department of Justice has already sued Google over its search dominance, including the way it pays Apple and other companies to use its search engine. But given how Microsoft and OpenAI’s partnership turned the Bing search engine into something people were actually talking about, the team-up might be worth the risk.
Toronto-based AI chip startup Taalas has emerged from stealth with $50 million in funding and the lofty aim of revolutionizing the GPU-centric world dominated by Nvidia.
Founded by Ljubisa Bajic, Lejla Bajic, and Drago Ignjatovic, all previously from Tenstorrent (the creator of Grayskull), Taalas is developing an automated flow for quickly turning any AI model – Transformers, SSMs, Diffusers, MoEs, etc. – into custom silicon. The company claims that the resulting Hardcore Models are 1000x more efficient than their software counterparts.
The startup also says that one of its chips can hold an entire large AI model without requiring external memory, and the efficiency of hard-wired computation enables a single chip to outperform a small GPU data center.
Casting intelligence directly into silicon
“Artificial intelligence is like electrical power – an essential good that will need to be made available to all. Commoditizing AI requires a 1000x improvement in computational power and efficiency, a goal that is unattainable via the current incremental approaches. The path forward is to realize that we should not be simulating intelligence on general purpose computers, but casting intelligence directly into silicon. Implementing deep learning models in silicon is the straightest path to sustainable AI,” said Ljubisa Bajic, Taalas’ CEO.
“We believe the Taalas ‘direct-to-silicon’ foundry unlocks three fundamental breakthroughs: dramatically resetting the cost structure of AI today, viably enabling the next 10-100x growth in model size, and efficiently running powerful models locally on any consumer device. This is perhaps the most important mission in computing today for the future scalability of AI. And we are proud to support this remarkable n-of-1 team as they do it,” said Matt Humphrey, Partner at Quiet Capital which led the two rounds of funding alongside Pierre Lamond, an advisor at Eclipse Ventures.
Taalas says it will be taping out its first large language model chip in the third quarter of 2024, and aiming to make its chips available to the first customers in Q1 2025.
South Korean chipmaker SK Hynix, a key Nvidia supplier, says it has already sold out of its entire 2024 production of stacked high-bandwidth memory DRAMs, crucial for AI processors in data centers. That’s a problem, given just how in demand HBM chips are right now.
However, a solution might have presented itself, as reports say SK Hynix is in talks with Japanese firm Kioxia Holdings to jointly produce HBM chips.
SK Hynix, the world’s second-largest memory chipmaker, is a major shareholder of Kioxia, the world’s No. 2 NAND flash manufacturer. If the deal goes ahead, the high-performance chips could be produced at facilities in Japan co-operated by Kioxia and US-based Western Digital Corp.
A merger hangs in the balance
What makes this situation even more interesting is that Kioxia and Western Digital have been engaged in merger talks, something SK Hynix opposes over fears it would reduce its opportunities with Kioxia. As a major shareholder in Kioxia, SK Hynix can block the merger between the Japanese firm and Western Digital, so this new move could be seen as an important sweetener to help things progress.
After the news of the potential deal broke, SK Hynix issued a statement saying simply, “There is no change in our stance that if there is a collaboration opportunity, we will enter into discussion on that matter.”
If the merger between Kioxia and Western Digital does proceed, it could create the biggest manufacturer of NAND memory on the planet, leapfrogging current leader Samsung, so there’s a lot to play for.
The Korea Economic Daily says of the move, “The partnership is expected to cement SK Hynix’s leadership in the HBM segment, which it almost evenly splits with Samsung Electronics. Kioxia will likely transform NAND flash lines into those for HBMs, used for generative AI applications, high-performance data centers and machine learning platforms, contributing to a revival in Japan’s semiconductor market.”
As with many other businesses, Intel envisions a future where artificial intelligence (AI) is a critical part of every technology we use. To make this a reality, Intel, a giant in the semiconductor industry, has laid out a detailed plan to take the lead in the AI chip market. This move is set to transform the way chips are made and used across the globe. ChatGPT creator OpenAI is also considering designing and manufacturing its own AI chips.
You might be wondering why Intel is focusing so much on AI. The answer is simple: AI is everywhere. From smartphones to cars, and from healthcare to finance, AI is becoming an essential tool. To meet this growing demand, Intel is pouring resources into creating more powerful and energy-efficient AI chips. They’re expanding their data centers and making sure their operations can withstand any global disruptions.
Intel AI Chips
Intel hasn’t forgotten its roots, though. They’re still dedicated to Moore’s Law, the observation that the number of transistors on a microchip roughly doubles every two years. This principle is driving their advancements in process technology, with exciting developments like Intel 7, Intel 4, and Intel 3, and the upcoming transition to Intel 20A and Intel 18A.
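The doubling described by Moore’s Law is just compound growth, which is easy to sketch; the starting count below is a hypothetical figure chosen purely for illustration, not an Intel spec:

```python
# Moore's Law as described above: transistor counts roughly double
# every two years. A quick compounding projection.

def projected_transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward, assuming one doubling
    every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Starting from a hypothetical 10 billion transistors, a decade of
# two-year doublings gives 2**5 = 32x the starting count.
print(projected_transistors(10e9, 10))  # -> 320000000000.0
```

A decade of that pace turns 10 billion transistors into 320 billion, which is why even small slips in the doubling period matter so much to chipmakers.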
The future of transistors
But it’s not just about adding more transistors. Intel is rethinking the very structure of transistors and how power is delivered to them. These innovations are crucial for AI applications, which need a lot of computational power without using too much energy.
Intel is also focusing on how to put all these chips together. They’re improving their assembly, testing, and packaging capabilities to meet the unique needs of AI chip manufacturing. Their vision is to create a “systems foundry” that can integrate different components into a seamless system, making it easier to train and execute AI models.
Collaboration is another key part of Intel’s strategy. They’re working with companies like Microsoft to produce next-generation chips using Intel’s advanced processes. These partnerships are essential for overcoming the challenges of developing AI systems.
“At Intel Foundry Direct Connect, Intel launched Intel Foundry as the world’s first systems foundry for the AI era, delivering leadership in technology, resiliency and sustainability. Intel CEO Pat Gelsinger and Stuart Pann, senior vice president and general manager of Intel Foundry, delivered the morning keynote session. They were joined by thought leaders from industry and government. Gina Raimondo, U.S. Secretary of Commerce, and Satya Nadella, Microsoft chairman and CEO, made remote appearances during the session.”
Intel knows that to truly succeed, they need to work with the rest of the industry. They’re pushing for industry-wide standards and open collaboration to make sure AI systems are compatible and can work together across different platforms and devices. One of their most important partnerships is with Arm. Together, they’re working to improve design capabilities and education, providing intellectual property and shuttle services at scale. This partnership is a strategic move to strengthen Intel’s position in the AI chip market.
Intel’s comprehensive approach to AI chip manufacturing is setting them up to be a leader in the AI-driven future. With their focus on technological investment, strategic partnerships, and industry collaboration, Intel is poised to drive innovation and meet the growing demands of AI. This is a pivotal moment for Intel and the tech industry as a whole, as they work to shape the future of artificial intelligence and its role in our lives.
Filed Under: Technology News, Top News
Latest timeswonderful Deals
Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.
As we look ahead to 2024, the landscape of technology is poised for significant change, particularly in the realm of edge AI. EdgeCortix, a leading semiconductor company based in Japan, is at the forefront of these developments. They are preparing to introduce new AI chips and hybrid edge-cloud architectures that are expected to make our interactions with devices faster, more efficient, and more responsive.
Edge AI chips
These new AI chips are designed with energy efficiency in mind, allowing devices to handle complex tasks with less reliance on cloud services. This not only saves energy but also ensures that your devices can operate more independently. The hybrid edge-cloud architectures that are being developed will provide quicker response times and reduced latency, without sacrificing the ability to handle large amounts of data.
Hybrid edge-cloud architectures
Software is also set to take center stage in this technological evolution. It will be crucial in bringing together the various components of the edge technology ecosystem. Devices will become smarter and more adaptable, thanks to AI-powered software. Generative AI applications, which can learn and adapt to user preferences, are expected to offer a more personalized experience for both consumers and employees.
The impact of edge AI is not limited to the tech industry. It is set to expand into other sectors, such as robotics, healthcare, and security, and even into areas like fashion and media. This demonstrates the versatility of edge AI and its potential to transform many aspects of our professional and personal lives.
EdgeCortix is not only focused on technological advancements but also on sustainability. The company is committed to developing edge AI technologies that are environmentally friendly, addressing the ecological concerns associated with tech growth. Strategic partnerships are also a priority for EdgeCortix, as they seek to improve the integration of software and hardware, optimizing the performance of the devices we use daily.
Operating without its own manufacturing facilities, EdgeCortix takes a software-centric approach to its business. This strategy allows the company to concentrate on creating energy-efficient AI processing solutions. Their focus on software enables them to serve a global market across various industries with robust and eco-conscious solutions.
The year 2024 is set to be a milestone year for edge AI technology. With the introduction of next-generation AI chips, innovative hybrid models, and advanced software solutions, along with the expansion of generative AI applications and a dedication to environmentally friendly practices, EdgeCortix is leading the charge into a new era of edge computing. These advancements are expected to significantly alter our technological experiences in day-to-day life, making devices smarter and more responsive while also being kinder to our planet. Keep an eye on these developments, as they promise to reshape the way we interact with technology in the very near future.
Sam Altman, a leading figure in the tech industry and head of OpenAI, is spearheading an ambitious project to raise funds for the development and worldwide production of advanced AI chips. These chips are designed to operate similarly to the human brain, which could lead to significant improvements in the efficiency and cost-effectiveness of AI computations.
The rapid advancement of AI technology has led to an increase in computational demands that current hardware is struggling to meet. Neuromorphic chips could be the solution to this problem, offering a more natural and efficient way to handle AI tasks than traditional processors. The success of these chips could have a profound impact on the future of AI, making Altman’s fundraising efforts crucial.
However, Altman’s initiative has not been without controversy. Some have questioned his decision to seek funding outside of OpenAI, while others have misinterpreted his actions as being at odds with the organization’s goals. One of the first attempts to secure funding involved investors from the Middle East, which raised concerns and prompted the U.S. government to intervene, citing national security interests. This has highlighted the importance of where chip manufacturing takes place and the need for domestic production to maintain hardware sovereignty and avoid risks associated with blacklisted entities.
Sam Altman investment to make new “Brain Chips” for AI
OpenAI has already agreed to buy $51 million worth of AI chips from one such startup backed by CEO Sam Altman.
The search for financial backers and partners to build chip fabrication facilities is ongoing. OpenAI and other major tech companies need to secure investment to maintain their lead in the AI race. Neuromorphic chips have the potential to revolutionize AI, but realizing their full capabilities will require collaboration across various sectors.
As Altman continues to push for the global production of these advanced AI chips, the implications for U.S. national security and the global AI infrastructure will be closely watched. The success of this initiative could mark a significant moment for investors, policymakers, and the AI community as a whole.
What are Neuromorphic chips?
Neuromorphic chips are a type of hardware designed to mimic the neural structure and functioning of the human brain. This approach to chip design is fundamentally different from traditional computing paradigms. Traditional computers use the von Neumann architecture, which separates memory and processing units, leading to a bottleneck in data transfer. Neuromorphic chips, on the other hand, integrate memory and processing, similar to how neurons in the brain function.
The core concept behind neuromorphic computing is to emulate the brain’s massively parallel computational approach. Neurons in the brain are interconnected through synapses, and they work in parallel to process information. Neuromorphic chips use artificial neurons and synapses to replicate this architecture. These artificial neurons and synapses are typically implemented using silicon-based technologies, although other materials are also being explored.
Ability to learn and adapt
One key feature of neuromorphic chips is their ability to learn and adapt. In traditional computing, tasks are performed based on pre-written algorithms and require explicit programming. Neuromorphic chips, however, can change their internal connections (synapses) in response to incoming data, a process akin to learning in the human brain. This adaptability makes them well-suited for tasks like pattern recognition, sensory data processing, and decision-making in unstructured environments.
Energy efficiency is another significant advantage of neuromorphic chips. The human brain is remarkably energy-efficient when compared to traditional computers. Neuromorphic chips emulate this efficiency by using a method called “spiking neural networks” (SNNs). In SNNs, information is processed and transmitted in the form of spikes, which are discrete events that occur only when needed, rather than the continuous signal processing used in conventional computers. This event-driven processing significantly reduces power consumption.
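The event-driven behavior described above can be sketched with a toy leaky integrate-and-fire neuron, the basic building block of many SNNs. This is a minimal illustration with arbitrary constants, not code for any real neuromorphic chip:

```python
# Toy leaky integrate-and-fire (LIF) neuron: potential accumulates,
# leaks over time, and a discrete spike fires only when it crosses
# a threshold -- most time steps produce no event at all.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return a spike train (1 = spike, 0 = silence) for a sequence
    of input currents, resetting the potential after each spike."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # discrete event: spike fires
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # no event, nothing transmitted
    return spikes

# A steady drip of weak input produces only occasional spikes.
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note how the output is mostly zeros: in hardware, those silent steps consume almost no power, which is the efficiency win over continuously clocked processing.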
Application of AI brain chips
Applications of neuromorphic chips are diverse and growing. They are particularly useful in areas where real-time processing, low power consumption, and the ability to handle complex, unstructured data are crucial. Examples include autonomous vehicles, where they can process sensory data in real-time; robotics, for more adaptive and efficient processing; and edge computing, where processing data on-site can reduce the need for data transmission to centralized cloud servers.
However, there are challenges in the development and adoption of neuromorphic chips. One major challenge is the complexity of designing and manufacturing these chips, as they require new materials and fabrication techniques. Additionally, developing software and algorithms that can fully utilize their unique architecture is an ongoing area of research.
In summary, neuromorphic chips represent a significant shift in computing, drawing inspiration from the human brain to create hardware that is efficient, adaptable, and capable of learning. Their development is still at a relatively early stage, but they hold great promise for a wide range of applications, particularly in areas where traditional computing architectures fall short. To learn more about the new AI chips being developed by Sam Altman jump over to Bloomberg.
Amazon Web Services (AWS) has made an exciting announcement at its re:Invent event, introducing two new processors: Graviton4 and Trainium2. These processors are specifically designed to improve the performance of machine learning training and generative AI applications, making them highly relevant for today’s artificial intelligence explosion.
Amazon Graviton4
The Graviton4 chip is a significant step up from its predecessor, the Graviton3. Users can expect a 30% improvement in computing performance, which means applications will run more smoothly and quickly. This chip also boasts a 50% increase in the number of cores, allowing it to handle multiple tasks simultaneously and boost productivity. Furthermore, with a 75% increase in memory bandwidth, data transfer is more efficient, reducing delays and speeding up processing times.
For those working with complex databases or engaging in big data analytics, the Amazon EC2 R8g instances powered by Graviton4 are designed to meet your needs. These instances are optimized to enhance the performance of demanding applications, enabling you to process and analyze data at impressive speeds.
Amazon Trainium2
Turning to the Trainium2 chip, it’s a game-changer for those involved in machine learning. It offers training speeds that are up to four times faster than the original Trainium chips, which means less time waiting and quicker access to insights. The Trainium2 chip can also be used in EC2 UltraClusters, which can scale up to an incredible 100,000 chips. This level of scalability allows you to tackle complex training tasks, such as foundation models and large language models, with performance that rivals supercomputers.
The Amazon EC2 Trn2 instances, which come equipped with Trainium2 chips, are built for these heavy workloads. They ensure high efficiency, meaning your AI models are trained faster and with less energy consumption, supporting sustainable computing practices.
AWS doesn’t just provide its own silicon; it also offers the flexibility to run applications on a variety of processors from other manufacturers like AMD, Intel, and NVIDIA. This diverse ecosystem ensures that you can select the best chip for your specific workload, optimizing both performance and cost.
Energy Efficient
When you use AWS managed services with Graviton4, you’ll notice an improvement in the price performance of your applications. This means you get more computing power for your money, which enhances the value of your investment in cloud infrastructure.
At the heart of AWS’s new chip releases is silicon innovation. AWS is committed to providing cost-effective computing options by developing chip architectures that are tailored to specific workloads. The Graviton4 and Trainium2 chips are not only designed for top-notch performance but also for energy-efficient operation.
The introduction of the Graviton4 and Trainium2 chips is a testament to AWS’s commitment to developing its cloud infrastructure. Whether you’re managing high-performance databases, exploring big data, or training complex AI models, these chips are crafted to meet your needs. With AWS’s focus on silicon innovation, the future looks bright for cost-effective and environmentally friendly computing solutions that don’t compromise on performance.
Today, Apple unveiled a range of new Apple silicon in the form of the latest M3, M3 Pro, and M3 Max chips, marking another milestone in Apple’s journey of silicon innovation and propelling the tech giant further into the realm of portable high-performance computing. The M3 family of chips is built using 3-nanometer technology, offering users unprecedented performance and efficiency.
Apple M3, M3 Pro, and M3 Max silicon
One of the key technologies within the M3 chips is Dynamic Caching. This innovative feature increases GPU utilization and performance by allocating the use of local memory in hardware in real time. This allows for a more efficient use of resources, leading to improved performance. Additionally, the GPU in the M3 chips introduces new rendering features to Apple silicon. These include hardware-accelerated mesh shading and hardware-accelerated ray tracing. These features enable more visually complex scenes and more realistic gaming environments, enhancing the overall user experience.
These new additions promise to set new benchmarks for both performance and efficiency. Let’s dive into the intricate details that make these chips so exceptional.
First and foremost, the graphics processing unit (GPU) in the M3 family showcases a major stride in architecture. The introduction of Dynamic Caching allocates local memory in hardware, in real-time. This means that the GPU utilizes only the precise amount of memory required for each task. This is not just an incremental update; it’s an industry-first approach that markedly boosts GPU utilization.
Dynamic Caching: Allocates exactly the memory needed for each task, in real-time.
Increased GPU Utilization: Enhances performance for graphics-intensive applications and games.
With the M3 chips, Mac users get their first taste of hardware-accelerated ray tracing. If you’re wondering how this will benefit you, ray tracing simulates the properties of light interacting with objects in a scene, yielding incredibly realistic images. This is a boon for game developers who can now render shadows and reflections with unprecedented accuracy. Add to this the hardware-accelerated mesh shading, and you have a potent combination for creating visually complex scenes.
Ray Tracing: Models the properties of light for ultra-realistic images.
Mesh Shading: Increases capability and efficiency in geometry processing.
Now, let’s switch gears and talk about the central processing unit (CPU). The M3, M3 Pro, and M3 Max offer architectural improvements to both performance and efficiency cores. You’ll be thrilled to find that tasks like compiling millions of lines of code in Xcode or playing hundreds of audio tracks in Logic Pro are going to be faster and more efficient.
Performance Cores: Up to 30% faster than M1.
Efficiency Cores: Up to 50% faster than M1.
Another highlight is the unified memory architecture that features high bandwidth, low latency, and unmatched power efficiency. This architecture enables all technologies in the chip to access a single pool of memory, streamlining performance and reducing memory requirements.
Unified Memory Architecture: Streamlines performance and reduces memory requirements.
Moving on to specialized engines for Artificial Intelligence (AI) and video, the enhanced Neural Engine in the M3 family accelerates machine learning models at a pace that’s up to 60% faster than its predecessors. Additionally, the media engine has been fine-tuned to provide hardware acceleration for popular video codecs, thus extending battery life.
Neural Engine: 60% faster, enhancing AI and machine learning workflows.
Media Engine: Supports hardware acceleration for popular video codecs.
Last but not least, the M3 Max takes professional performance to new heights with its astonishing 92 billion transistors and support for up to 128GB of unified memory. This makes it ideal for those tackling the most demanding workloads, including AI development and high-resolution video post-production.
M3 MacBook Pro laptops
As well as announcing its new M3, M3 Pro, and M3 Max silicon, Apple also unveiled a new MacBook Pro lineup designed to cater to a wide range of users, from everyday consumers to professional creatives and researchers. Each model is equipped with one of the new M3 chips, which offer a next-generation GPU architecture and a faster CPU.
“There is nothing quite like MacBook Pro. With the remarkable power-efficient performance of Apple silicon, up to 22 hours of battery life, a stunning Liquid Retina XDR display, and advanced connectivity, MacBook Pro empowers users to do their life’s best work,” said John Ternus, Apple’s senior vice president of Hardware Engineering. “With the next generation of M3 chips, we’re raising the bar yet again for what a pro laptop can do. We’re excited to bring MacBook Pro and its best-in-class capabilities to the broadest set of users yet, and for those upgrading from an Intel-based MacBook Pro, it’s a game-changing experience in every way.”
The 14-inch MacBook Pro with the M3 chip is an ideal choice for everyday tasks, professional applications, and gaming. Priced at $1,599, this model offers a balance of performance and affordability. For those requiring more power for demanding workflows, the 14- and 16-inch MacBook Pro models with the M3 Pro chip are perfect. These models are designed to meet the needs of coders, creatives, and researchers, offering greater performance and additional unified memory support.
For power users seeking extreme performance and capabilities, the 14- and 16-inch MacBook Pro with the M3 Max chip is the ultimate choice. With a powerful GPU and CPU, and support for up to 128GB of unified memory, this model is tailor-made for machine learning programmers, 3D artists, and video editors. The M3 Pro and M3 Max models are also available in a sleek space black finish, adding an aesthetic appeal to their robust performance.
Beyond the chips, all MacBook Pro models are equipped with a range of cutting-edge features. These include a Liquid Retina XDR display that offers stunning visual clarity, a built-in 1080p camera for high-quality video calls, a six-speaker sound system for immersive audio, and various connectivity options for enhanced convenience. Furthermore, these models offer up to 22 hours of battery life, ensuring users can work or play uninterrupted for longer periods.
For more information and full specifications on each of the new MacBook Pro M3 Apple silicon systems jump over to the official website.