It was expected that Intel’s LGA1851 socket would house the tech giant’s next-gen Arrow Lake chips, but for now it seems the company might have another use for it.
At the recent Embedded World conference, Intel unveiled its Meteor Lake-PS architecture for edge systems, the first Core Ultra processor on an LGA socket.
The new SoC design, which integrates the Intel Arc GPU and a neural processing unit, is aimed at enabling generative AI and handling demanding graphics workloads for sectors such as retail, education, smart cities, and industry.
Ultra low TDP
Intel says its Core Ultra processors offer up to 5.02x the image classification inference performance of 14th Gen Intel Core desktop processors. Applications for the PS series include GenAI-enabled kiosks and smart point-of-sale systems in physical retail stores, interactive whiteboards for advanced classroom experiences, and AI vision-enhanced industrial devices for manufacturing and roadside units.
The new chips are designed with low-power, always-on usage scenarios in mind: none of them has a thermal design power (TDP) above 65W, and there’s even a low-power version rated at 15W (12-28W configurable TDP).
Intel says “Moving away from the conventional setup where Intel Core desktop processors are combined with discrete GPUs, the PS series of Intel Core Ultra processors introduce an innovative integration of GPU and AI Boost functionalities directly within the processors, alongside the flexible LGA socket configuration. Offering four times the number of graphics execution units (EUs) compared to their predecessors in the S or desktop series, these processors deliver a powerful alternative for handling AI and graphics-heavy tasks. This design not only negates the necessity for an additional discrete GPU, thereby lowering costs and simplifying the overall design process, it also positions these processors as the go-to solution for those prioritizing efficiency alongside enhanced performance.”
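To make Intel’s pitch concrete: the idea is that an edge application runs its vision or GenAI inference on the processor’s integrated Arc GPU or NPU rather than on a discrete card. The sketch below is only an illustration of how such an application might do that with OpenVINO; the model path is a placeholder, the GPU/NPU device names depend on the OpenVINO release and drivers installed, and none of this is taken from Intel’s announcement.

```python
# Illustrative sketch: running inference on a Core Ultra's integrated GPU/NPU
# with OpenVINO. "model.xml" is a placeholder for any OpenVINO IR model.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")  # placeholder path

# Prefer the NPU if the driver exposes it, otherwise fall back to the integrated GPU.
device = "NPU" if "NPU" in core.available_devices else "GPU"
compiled = core.compile_model(model, device_name=device)

# Dummy input shaped like a 224x224 RGB image batch (adjust to the real model).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy])[compiled.output(0)]
print("Ran on", device, "- output shape:", result.shape)
```

The point of the fallback line is the one Intel is making in the quote above: with the GPU and NPU on the package, the application never has to assume a discrete accelerator is present.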
The desktop LGA1851 socket can support 5600MHz DDR5 memory, two PCIe Gen4 SSDs, and four Thunderbolt 4 devices. There is a notable absence of chipset support for Thunderbolt 5, Wi-Fi 7, and PCIe Gen5, however.
The new desktop Intel Meteor Lake chips are not expected to be available until the fourth quarter of 2024. This timeline also coincides with the expected launch of Arrow Lake desktop CPUs, according to the latest industry rumors.
A new open source AI model has emerged that could reshape the way we think about language processing. The Eagle-7B model, a brainchild of RWKV and supported by the Linux Foundation, is making waves with its unique approach to handling language. Unlike the Transformer models that currently dominate the field, Eagle-7B is built on a recurrent neural network (RNN) framework, specifically the RWKV-v5 architecture. This model is not just another iteration in AI technology; it’s a step forward that promises to make language processing faster and more cost-effective.
One of the most striking aspects of Eagle-7B is its commitment to energy efficiency. In a world where the environmental impact of technology is under scrutiny, Eagle-7B stands out for its low energy consumption during training. This makes it one of the most eco-friendly options among large language models (LLMs), a critical consideration for sustainable development in AI.
But Eagle-7B’s prowess doesn’t stop at being green. It’s also a polyglot’s dream, trained on an extensive dataset of over 1.1 trillion tokens spanning more than 100 languages. This training has equipped Eagle-7B to handle multilingual tasks with ease, often performing on par with, or even better than, much larger models such as Falcon (trained on 1.5 trillion tokens) and LLaMA 2 (trained on 2 trillion tokens).
Eagle-7B – RWKV-v5
The technical innovation of Eagle-7B doesn’t end with its linguistic abilities. The model’s hybrid architecture, which combines RNNs with temporal convolutional networks (TCNs), brings a host of benefits. Users can expect faster inference times, less memory usage, and the ability to process sequences of indefinite length. These features make Eagle-7B not just a theoretical marvel but a practical tool that can be applied to a wide range of real-world scenarios.
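To make the memory argument concrete, here is a deliberately simplified, library-free sketch of the difference: a Transformer-style decoder keeps a growing key/value cache for every generated token, while an RNN-style model like Eagle-7B carries a fixed-size state from step to step. The shapes and update rule below are toy placeholders, not the actual RWKV-v5 equations.

```python
# Toy illustration (not the real RWKV-v5 math): why recurrent inference uses
# constant memory per step while an attention cache keeps growing.
import numpy as np

D = 64  # hidden size of the toy model

def attention_style_step(kv_cache, x):
    # Transformer decoding: the cache grows by one entry per generated token,
    # so memory and per-step work scale with the sequence length.
    kv_cache.append(x)
    context = np.mean(kv_cache, axis=0)  # stand-in for attention over the cache
    return kv_cache, context

def recurrent_style_step(state, x, decay=0.9):
    # RNN decoding: the state keeps the same size no matter how many tokens
    # have been processed, so memory per step stays constant.
    state = decay * state + (1.0 - decay) * x  # stand-in for the recurrent update
    return state, state

kv_cache, state = [], np.zeros(D)
for _ in range(1000):
    x = np.random.randn(D)
    kv_cache, _ = attention_style_step(kv_cache, x)
    state, _ = recurrent_style_step(state, x)

print("attention cache entries:", len(kv_cache))  # grows with sequence length
print("recurrent state size:", state.shape)       # fixed at (D,)
```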
Accessibility is another cornerstone of the Eagle-7B model. Thanks to its open-source licensing under Apache 2, the model fosters collaboration within the AI community, encouraging researchers and developers to build upon its foundation. Eagle-7B is readily available on platforms like Hugging Face, which means integrating it into your projects is a straightforward process.
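As a rough sketch of what that integration might look like, the snippet below loads the model through the Hugging Face transformers library. The repository id shown is an assumption (check the RWKV organization on Hugging Face for the exact name), and trust_remote_code is assumed to be needed because the RWKV-v5 architecture ships its own modeling code.

```python
# Hedged sketch: loading Eagle-7B via transformers. The repo id below is an
# assumption -- confirm the exact name on the RWKV Hugging Face organization.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RWKV/v5-Eagle-7B-HF"  # illustrative repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```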
Features of the Eagle-7B AI model include:
Built on the RWKV-v5 architecture (a linear transformer with 10-100x+ lower inference cost)
Ranks as the world’s greenest 7B model (per token)
Trained on 1.1 Trillion Tokens across 100+ languages
Outperforms all 7B class models in multi-lingual benchmarks
Approaches Falcon (1.5T), LLaMA2 (2T), Mistral (>2T?) level of performance in English evals
Trades blows with MPT-7B (1T) in English evals
All while being an “Attention-Free Transformer”
Is a foundation model, with a very small instruct tune – further fine-tuning is required for various use cases!
Released under the Apache 2.0 license via the Linux Foundation, so it can be used personally or commercially without restrictions
Download from Huggingface, and use it anywhere (even locally)
Use our reference pip inference package (a usage sketch follows this list), or any other community inference options (Desktop App, RWKV.cpp, etc)
Fine-tune using our Infctx trainer
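For the reference pip package route mentioned above, a minimal sketch might look like the following. It assumes `pip install rwkv`, a locally downloaded Eagle-7B checkpoint (the path is a placeholder), and the package’s RWKV/PIPELINE interface as documented in the RWKV repositories; treat the strategy string and vocab name as assumptions to verify against that documentation.

```python
# Hedged sketch of the reference `rwkv` pip package (pip install rwkv).
# The checkpoint path is a placeholder; strategy strings follow the package
# docs (e.g. "cpu fp32", "cuda fp16").
import os
os.environ["RWKV_JIT_ON"] = "1"  # optional JIT speed-up, per the package docs

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="/path/to/RWKV-v5-Eagle-7B.pth", strategy="cpu fp32")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # world vocab used by v5 models

args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)
print(pipeline.generate("The Eiffel Tower is located in", token_count=64, args=args))
```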
Eagle-7B also benefits from continuous performance improvements, ensuring that it remains adaptable and relevant for various applications. Its scalability is a testament to its potential, as it can be integrated into larger and more complex systems, opening up a world of possibilities for future advancements.
The launch of Eagle-7B marks a significant moment in the development of neural networks and AI. It challenges the prevailing Transformer-based models and breathes new life into the potential of RNNs. This model shows that with the right data and training, RNNs can achieve top-tier performance.
Eagle-7B is more than just a new tool in the AI arsenal; it represents the ongoing quest for innovation within the field of neural networks. With its unique combination of RNN and TCN technology, dedication to energy efficiency, multilingual capabilities, and open-source ethos, Eagle-7B is set to play a pivotal role in the AI landscape. As we continue to explore and expand the boundaries of AI technology, keep an eye on how Eagle-7B transforms the standards of language processing.
Today Apple unveiled a range of new Apple silicon in the form of the latest M3, M3 Pro, and M3 Max chips, marking another milestone in Apple’s journey of silicon innovation and propelling the tech giant further into the realm of portable high-performance computing. The M3 family of chips is built using 3-nanometer process technology, offering users unprecedented performance and efficiency.
Apple M3, M3 Pro, and M3 Max silicon
One of the key technologies within the M3 chips is Dynamic Caching. This innovative feature increases GPU utilization and performance by allocating local memory in hardware in real time, allowing for a more efficient use of resources. Additionally, the GPU in the M3 chips introduces new rendering features to Apple silicon, including hardware-accelerated mesh shading and hardware-accelerated ray tracing. These features enable more visually complex scenes and more realistic gaming environments, enhancing the overall user experience.
These new additions promise to set new benchmarks for both performance and efficiency. Let’s dive into the intricate details that make these chips so exceptional.
First and foremost, the graphics processing unit (GPU) in the M3 family showcases a major stride in architecture. The introduction of Dynamic Caching allocates local memory in hardware, in real-time. This means that the GPU utilizes only the precise amount of memory required for each task. This is not just an incremental update; it’s an industry-first approach that markedly boosts GPU utilization.
Dynamic Caching: Allocates exactly the memory needed for each task, in real-time.
Increased GPU Utilization: Enhances performance for graphics-intensive applications and games.
With the M3 chips, Mac users get their first taste of hardware-accelerated ray tracing. If you’re wondering how this will benefit you, ray tracing simulates the properties of light interacting with objects in a scene, yielding incredibly realistic images. This is a boon for game developers who can now render shadows and reflections with unprecedented accuracy. Add to this the hardware-accelerated mesh shading, and you have a potent combination for creating visually complex scenes.
Ray Tracing: Models the properties of light for ultra-realistic images.
Mesh Shading: Increases capability and efficiency in geometry processing.
Now, let’s switch gears and talk about the central processing unit (CPU). The M3, M3 Pro, and M3 Max offer architectural improvements to both performance and efficiency cores. You’ll be thrilled to find that tasks like compiling millions of lines of code in Xcode or playing hundreds of audio tracks in Logic Pro are going to be faster and more efficient.
Performance Cores: Up to 30% faster than M1.
Efficiency Cores: Up to 50% faster than M1.
Another highlight is the unified memory architecture that features high bandwidth, low latency, and unmatched power efficiency. This architecture enables all technologies in the chip to access a single pool of memory, streamlining performance and reducing memory requirements.
Unified Memory Architecture: Streamlines performance and reduces memory requirements.
Moving on to specialized engines for Artificial Intelligence (AI) and video, the enhanced Neural Engine in the M3 family accelerates machine learning models at a pace that’s up to 60% faster than its predecessors. Additionally, the media engine has been fine-tuned to provide hardware acceleration for popular video codecs, thus extending battery life.
Neural Engine: 60% faster, enhancing AI and machine learning workflows.
Media Engine: Supports hardware acceleration for popular video codecs.
Last but not least, the M3 Max takes professional performance to new heights with its astonishing 92 billion transistors and support for up to 128GB of unified memory. This makes it ideal for those tackling the most demanding workloads, including AI development and high-resolution video post-production.
M3 MacBook Pro laptops
As well as announcing its new M3, M3 Pro, and M3 Max silicon, Apple also unveiled a new MacBook Pro lineup designed to cater to a wide range of users, from everyday consumers to professional creatives and researchers. Each model is equipped with one of the new M3 chips, which offer a next-generation GPU architecture and a faster CPU.
“There is nothing quite like MacBook Pro. With the remarkable power-efficient performance of Apple silicon, up to 22 hours of battery life, a stunning Liquid Retina XDR display, and advanced connectivity, MacBook Pro empowers users to do their life’s best work,” said John Ternus, Apple’s senior vice president of Hardware Engineering. “With the next generation of M3 chips, we’re raising the bar yet again for what a pro laptop can do. We’re excited to bring MacBook Pro and its best-in-class capabilities to the broadest set of users yet, and for those upgrading from an Intel-based MacBook Pro, it’s a game-changing experience in every way.”
The 14-inch MacBook Pro with the M3 chip is an ideal choice for everyday tasks, professional applications, and gaming. Priced at $1,599, this model offers a balance of performance and affordability. For those requiring more power for demanding workflows, the 14- and 16-inch MacBook Pro models with the M3 Pro chip are perfect. These models are designed to meet the needs of coders, creatives, and researchers, offering greater performance and additional unified memory support.
For power users seeking extreme performance and capabilities, the 14- and 16-inch MacBook Pro with the M3 Max chip is the ultimate choice. With a powerful GPU and CPU, and support for up to 128GB of unified memory, this model is tailor-made for machine learning programmers, 3D artists, and video editors. The M3 Pro and M3 Max models are also available in a sleek space black finish, adding an aesthetic appeal to their robust performance.
Beyond the chips, all MacBook Pro models are equipped with a range of cutting-edge features. These include a Liquid Retina XDR display that offers stunning visual clarity, a built-in 1080p camera for high-quality video calls, a six-speaker sound system for immersive audio, and various connectivity options for enhanced convenience. Furthermore, these models offer up to 22 hours of battery life, ensuring users can work or play uninterrupted for longer periods.
For more information and full specifications on each of the new MacBook Pro M3 Apple silicon systems, jump over to Apple’s official website.