
Intel unveils flurry of new Arc GPUs — however serious graphics users will have to wait for more powerful models, as these focus on a completely different and more lucrative market


Two years after the debut of its Arc Alchemist GPUs, Intel is launching six new Arc products, but these are designed for edge/embedded systems. 

These edge systems, which process data near the source to reduce latency and bandwidth use, are becoming increasingly essential in areas such as IoT, autonomous vehicles, and AI applications.




Lack of focus doesn’t equal lack of intelligence — it’s proof of an intricate brain


Imagine a busy restaurant: dishes clattering, music playing, people talking loudly over one another. It’s a wonder that anyone in that kind of environment can focus enough to have a conversation. A new study by researchers at Brown University’s Carney Institute for Brain Science provides some of the most detailed insights yet into the brain mechanisms that help people pay attention amid such distraction, as well as what’s happening when they can’t focus.

In an earlier psychology study, the researchers established that people can separately control how much they focus (by enhancing relevant information) and how much they filter (by tuning out distraction). The team’s new research, published in Nature Human Behaviour, unveils the process by which the brain coordinates these two critical functions.

Lead author and neuroscientist Harrison Ritz likened the process to how humans coordinate muscle activity to perform complex physical tasks.

“In the same way that we bring together more than 50 muscles to perform a physical task like using chopsticks, our study found that we can coordinate multiple different forms of attention in order to perform acts of mental dexterity,” said Ritz, who conducted the study while a Ph.D. student at Brown.

The findings provide insight into how people use their powers of attention as well as what makes attention fail, said co-author Amitai Shenhav, an associate professor in Brown’s Department of Cognitive, Linguistic and Psychological Sciences.

“These findings can help us to understand how we as humans are able to exhibit such tremendous cognitive flexibility — to pay attention to what we want, when we want to,” Shenhav said. “They can also help us better understand limitations on that flexibility, and how limitations might manifest in certain attention-related disorders such as ADHD.”

The focus-and-filter test

To conduct the study, Ritz administered a cognitive task to participants while measuring their brain activity in an fMRI machine. Participants saw a swirling mass of green and purple dots moving left and right, like a swarm of fireflies. The tasks, which varied in difficulty, involved distinguishing between the movement and colors of the dots. For example, participants in one exercise were instructed to select which color was in the majority for the rapidly moving dots when the ratio of purple to green was almost 50/50.
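For readers who want a concrete feel for the stimulus, here is a minimal sketch of how one such trial could be generated. It is illustrative only, not the study’s actual code; the parameter names and values (for example, the near-50/50 color split) are assumptions based on the description above.

```python
# Illustrative sketch (not the authors' code): build one trial of a
# color/motion dot task like the one described above. The parameter
# names and values are invented for this example.
import numpy as np

rng = np.random.default_rng(0)

def make_trial(n_dots=200, color_coherence=0.52, motion_coherence=0.70):
    # Colors: barely more purple than green, so the "which color is in
    # the majority?" judgment is hard.
    colors = rng.choice(["purple", "green"], size=n_dots,
                        p=[color_coherence, 1 - color_coherence])
    # Motion: most dots drift one way, the rest the other; in the color
    # task this motion is pure distraction.
    directions = rng.choice(["left", "right"], size=n_dots,
                            p=[motion_coherence, 1 - motion_coherence])
    return colors, directions

colors, directions = make_trial()
print("majority color:", "purple" if (colors == "purple").mean() > 0.5 else "green")
```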

Ritz and Shenhav then analyzed participants’ brain activity in response to the tasks.

Ritz, who is now a postdoctoral fellow at the Princeton Neuroscience Institute, explained how the two brain regions work together during these types of tasks.

“You can think about the intraparietal sulcus as having two knobs on a radio dial: one that adjusts focusing and one that adjusts filtering,” Ritz said. “In our study, the anterior cingulate cortex tracks what’s going on with the dots. When the anterior cingulate cortex recognizes that, for instance, motion is making the task more difficult, it directs the intraparietal sulcus to adjust the filtering knob in order to reduce the sensitivity to motion.

“In the scenario where the purple and green dots are almost at 50/50, it might also direct the intraparietal sulcus to adjust the focusing knob in order to increase the sensitivity to color. Now the relevant brain regions are less sensitive to motion and more sensitive to the appropriate color, so the participant is better able to make the correct selection.”
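To make the knob analogy concrete, here is a toy simulation; it is not the study’s model, and all gains and noise levels are invented for illustration. Turning up a “focus” gain on the weak color signal while turning down a “filter” gain on the strong but irrelevant motion signal makes the simulated color judgment more accurate.

```python
# Toy illustration of the "two knobs" analogy (not the study's model).
# All names and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def simulate_accuracy(focus_gain, filter_gain, n_trials=100_000):
    color_evidence = rng.normal(0.2, 1.0, n_trials)   # weak signal favoring "purple"
    motion_evidence = rng.normal(0.0, 2.0, n_trials)  # strong but task-irrelevant
    decision = focus_gain * color_evidence + filter_gain * motion_evidence
    return (decision > 0).mean()  # proportion of correct ("purple") choices

print("untuned (focus=1.0, filter=1.0):", simulate_accuracy(1.0, 1.0))  # ~0.54
print("tuned   (focus=2.0, filter=0.2):", simulate_accuracy(2.0, 0.2))  # ~0.58
```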

Ritz’s description highlights the importance of mental coordination over mental capacity, revealing an often-expressed idea to be a misconception.

“When people talk about the limitations of the mind, they often put it in terms of, ‘humans just don’t have the mental capacity’ or ‘humans lack computing power,'” Ritz said. “These findings support a different perspective on why we’re not focused all the time. It’s not that our brains are too simple, but instead that our brains are really complicated, and it’s the coordination that’s hard.”

Ongoing research projects are building on these study findings. A partnership with physician-scientists at Brown University and Baylor College of Medicine is investigating focus-and-filter strategies in patients with treatment-resistant depression. Researchers in Shenhav’s lab are looking at the way motivation drives attention; one study co-led by Ritz and Brown Ph.D. student Xiamin Leng examines the impact of financial rewards and penalties on focus-and-filter strategies.

The study was funded by the National Institutes of Health (R01MH124849, S10OD02518), the National Science Foundation (2046111) and by a postdoctoral fellowship from the C.V. Starr Foundation.




ChatGPT alternative Groq focuses on high-speed responses


In the fast-paced world of artificial intelligence, a new ChatGPT alternative has emerged, promising to transform the way we interact with AI chatbots. Groq is a platform that’s gaining traction, on a mission to set the standard for GenAI inference speed and bring real-time AI applications to life.

Groq offers users a breakthrough in response times that could redefine digital customer service. At the core of Groq’s innovation is a unique piece of hardware known as the Language Processing Unit (LPU), a new type of end-to-end processing unit that provides the fastest inference for computationally intensive applications with a sequential component, such as large language models (LLMs). This specialized processor is engineered specifically for language tasks, enabling it to outperform conventional processors in both speed and accuracy.

Language Processing Unit (LPU)

The LPU is designed to overcome the two main LLM bottlenecks: compute density and memory bandwidth. An LPU has greater compute capacity than a GPU or CPU for LLM workloads, which reduces the time spent calculating each word and allows sequences of text to be generated much faster. Additionally, eliminating external memory bottlenecks enables the LPU Inference Engine to deliver orders of magnitude better performance on LLMs compared to GPUs.
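To see why memory bandwidth matters so much, consider a rough back-of-the-envelope sketch; the figures below are illustrative assumptions, not Groq’s published numbers. When generating a single sequence, each new token requires streaming essentially all of the model’s weights through the processor, so memory bandwidth sets an upper bound on tokens per second.

```python
# Back-of-the-envelope sketch of why memory bandwidth caps single-stream
# LLM decoding speed. All figures are illustrative assumptions, not
# measurements of any particular chip.
def tokens_per_second_ceiling(params_billion, bytes_per_param, bandwidth_gb_s):
    weight_gb = params_billion * bytes_per_param  # GB of weights streamed per token
    return bandwidth_gb_s / weight_gb

# A 70B-parameter model in 16-bit weights is roughly 140 GB. On a
# hypothetical accelerator with 2,000 GB/s of memory bandwidth, one
# sequence tops out at roughly:
print(round(tokens_per_second_ceiling(70, 2, 2000), 1), "tokens/s")  # ~14.3
```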

The LPU is the cornerstone of Groq’s capabilities. It’s not just about being fast; it’s about understanding and processing the intricacies of human language with precision. This is particularly important when dealing with complex language models that need to interpret and respond to a wide array of customer inquiries. The result is a chatbot that doesn’t just reply quickly but does so with a level of understanding that closely mimics human interaction.

Speed is a critical factor in today’s AI chatbots. In an era where consumers expect immediate results, the ability to provide swift customer service is invaluable. Groq’s platform is designed to meet these expectations, offering businesses and developers a way to enhance user experience significantly. By ensuring that interactions are not only prompt but also meaningful, Groq provides a competitive advantage that can set companies apart in the marketplace.

Groq, a faster ChatGPT alternative

Here are some other articles you may find of interest on the subject of ChatGPT alternatives:

One of the standout features of Groq’s platform is its support for open-source language models, such as Meta’s LLaMA. This approach allows for a high degree of versatility and a wide range of potential applications. By not restricting users to a single model, Groq’s platform encourages innovation and adaptation, which is crucial in the ever-evolving field of AI.

Recognizing the varied needs of different businesses, Groq has made customization and integration a priority. The platform offers developers easy access to APIs, allowing them to weave Groq’s capabilities into existing systems effortlessly. This adaptability is key for companies that want to maintain their unique brand voice while providing efficient service. Groq supports standard machine learning (ML) frameworks such as PyTorch, TensorFlow, and ONNX for inference. Groq does not currently support ML training with the LPU Inference Engine.
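Since ONNX is among the inference formats mentioned above, a common first step is exporting an existing PyTorch model to ONNX. Below is a minimal, hedged sketch of that export; the model, shapes, and file name are placeholders, and the later step of deploying the file to Groq hardware is not shown, so consult Groq’s documentation for the actual ingestion workflow.

```python
# Minimal sketch: export a PyTorch model to ONNX, one of the inference
# formats the article says Groq accepts. The model below is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

dummy_input = torch.randn(1, 128)            # example input shape
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},    # allow variable batch size
)
```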

For custom development, the GroqWare suite, including the Groq Compiler, offers a push-button experience to get models up and running quickly. For optimizing workloads, Groq offers the ability to hand-code to the Groq architecture, with fine-grained control of any GroqChip™ processor, enabling customers to develop custom applications and maximize their performance.

How to use Groq

If you want to get started with Groq, here are some of the fastest ways to get up and running:

  • GroqCloud: Request API access to run LLM applications with token-based pricing (a request sketch follows this list).
  • Groq Compiler: Compile your current application to see detailed performance, latency, and power utilization metrics. Request access via the Groq Customer Portal.
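As a rough illustration of the GroqCloud option, here is a hedged sketch of a chat-completion request in Python. The endpoint URL, model name, and response shape are assumptions based on GroqCloud exposing an OpenAI-compatible API; verify everything against the current Groq documentation.

```python
# Hedged sketch of a GroqCloud chat-completion request. The endpoint URL,
# model id, and JSON shape are assumptions (OpenAI-compatible API); check
# the current Groq docs before relying on them.
import os
import requests

API_KEY = os.environ["GROQ_API_KEY"]  # issued after requesting API access

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama2-70b-4096",  # placeholder model id
        "messages": [{"role": "user",
                      "content": "In one sentence, what is an LPU?"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```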

Despite its advanced technology, Groq has managed to position itself as an affordable solution. The combination of cost-effectiveness, a powerful LPU, and extensive customization options makes Groq an attractive choice for businesses looking to implement AI chatbots without breaking the bank.

It’s important to address a common point of confusion: Groq, spelled with a ‘Q’, should not be mistaken for ‘Grok’, the xAI chatbot on X (formerly Twitter). Groq is a dedicated hardware company focused on AI processing, and the distinction is crucial for those researching AI solutions to avoid any potential mix-up.

Groq’s AI chatbot platform is poised to set new standards for speed and efficiency in the industry. With its advanced LPU, compatibility with open-source models, and customizable features, Groq is establishing itself as a forward-thinking solution for businesses and developers. As AI technology continues to progress, platforms like Groq are likely to lead the way, shaping the future of our interactions with technology.

Filed Under: Technology News, Top News



2024 Toyota Corolla range to focus on technology

2024 Toyota Corolla

Toyota has revealed that it will focus on technology with the new 2024 Toyota Corolla, with a range that includes new Corolla Hatchback, Touring Sports, and Sedan models.

Toyota has focused on new, advanced technologies in its programme of upgrades for the 2024 Corolla range, adding further convenience and sophistication to its highly successful range of mid-size Hatchback, Touring Sports and Sedan models.

The new line-up offers the convenience of a smartphone-based digital key and introduces the nanoe-X™ air quality system, combating viruses, bacteria, allergens and bad odours inside the vehicle. New colour options are also available, including an on-trend Super Green metallic.

These new features add to the increased appeal delivered by the Corolla’s adoption of fifth generation Toyota hybrid electric technology, giving higher performance and extended all-electric EV driving capability while reducing fuel consumption and emissions.

Corolla owners can enjoy the convenience of a new Smart Digital Key. This allows the car to be accessed and driven via a smartphone as an alternative to a physical key fob.

The Smart Digital Key is compatible with Apple and Android phones and allows up to five individual profiles to be stored for each vehicle. An authorised user only has to have their phone on their person to unlock the door and start the car; there is no need to call up an app to gain access.

You can find out more details about the new 2024 Toyota Corolla range over at Toyota via the link below; as yet, there are no details on pricing for the new models.

Source Toyota

Filed Under: Auto News
