
Spotify Supremium leak reveals what the new tier and some features may look like at launch


Screenshots have emerged online of what may be the user interface for Spotify’s long-awaited Supremium tier. Or should we say “Enhanced Listening”? That is apparently the new name for the tier, according to Reddit user OhItsTom, who posted seven images of the potential update. They show what the new tier could look like on desktop and mobile, and, interestingly, reveal some of the tools and text windows that may be present in the final product.

Based on the four smartphone screenshots, the hi-res audio feature will apparently be known as Spotify Lossless. The set includes an introductory guide explaining how it works: subscribers can wirelessly stream music “in up to 24-bit” via Spotify Connect on a compatible device, and a Lossless label will light up to let you know when you’re streaming in the higher-quality format. Spotify is also giving users “pro tips” and a troubleshooting tool in case lossless audio isn’t coming through. The last screenshot says you can adjust song quality “and Downloads in Settings.”

[Screenshots: Spotify Lossless on mobile and on desktop. Image credit: Future]


Apple TV Plus could be next to get an ad-based tier after Netflix, Disney and Amazon


One of the reasons I love Apple TV Plus so much is that, like the BBC, the only ads it runs are promos for its own programmes. It’s one of the most viewer-friendly streamers, which is why even its recent steep price hike felt better than going down the ad-supported route taken by Netflix, Prime Video and Disney Plus. As we reported at the time, Apple TV Plus is “one of the last bastions of ad-free cost-effective streaming services”. But now that appears to be under threat.

According to Business Insider, Apple has recruited a number of advertising executives. Its most recent hire is NBCUniversal’s Joseph Cady, who spent 14 years at the network as executive vice-president of advertising and partnerships, with responsibility for both data-driven and targeted TV advertising. Apple has also reportedly been testing a new AI-powered tool, similar to ones Meta and Google use, for optimizing App Store ads.


Running Mixtral 8x7B Mixture-of-Experts (MoE) on Google Colab’s free tier

Running Mixtral 8x7B MoE in Google Colab

If you are interested in running your own AI models locally on your home network or hardware, you might like to know that it is possible to run Mixtral 8x7B on Google Colab. Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights. Licensed under Apache 2.0, Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference.

The ability to run complex models on accessible platforms is a significant advantage for researchers and developers, and the Mixtral 8x7B Mixture of Experts (MoE) model is one such tool that has been making waves thanks to its capabilities. The challenge is that Google Colab’s free tier offers only around 16 GB of video memory (VRAM), while Mixtral 8x7B typically requires a hefty 45 GB to run smoothly. This gap has led to the development of techniques that enable the model to function effectively even with limited resources.

A recent paper introduces a method for fast inference that offloads parts of the model to the system’s RAM. This approach is a lifeline for those who do not have access to high-end hardware with extensive VRAM. The Mixtral 8x7B MoE model, designed by Mistral AI, is inherently sparse: for each token it activates only a small subset of its experts rather than the whole network. This design significantly reduces how much of the model must be resident at any moment, making it possible to run it on platforms with less VRAM.
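To make that sparsity concrete, here is a toy, self-contained PyTorch sketch of top-2 expert routing with made-up layer sizes (illustrative only, not Mixtral’s actual code): the router scores all eight experts for every token, but only the two highest-scoring experts are actually executed.

```python
# Toy sparse MoE routing: score all experts, run only the top-2 per token.
import torch
import torch.nn.functional as F

num_experts, hidden = 8, 32
router = torch.nn.Linear(hidden, num_experts)            # gating network
experts = [torch.nn.Linear(hidden, hidden) for _ in range(num_experts)]

x = torch.randn(4, hidden)                               # 4 tokens
gate_probs = F.softmax(router(x), dim=-1)                # (4, 8) expert scores
weights, picked = torch.topk(gate_probs, k=2, dim=-1)    # keep the best 2 per token
weights = weights / weights.sum(dim=-1, keepdim=True)    # renormalize their weights

out = torch.zeros_like(x)
for t in range(x.size(0)):                               # per token, only 2 of 8 experts run
    for w, idx in zip(weights[t], picked[t]):
        out[t] += w * experts[idx](x[t])
```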

The offloading technique is a game-changer when VRAM is maxed out. It transfers parts of the model that cannot be accommodated by the VRAM to the system RAM. This strategy allows users to leverage the power of the Mixtral 8x7B MoE model on standard consumer-grade hardware, bypassing the need for a VRAM upgrade.
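The snippet below is a conceptual sketch of that idea rather than the paper’s implementation: a hypothetical `ExpertOffloader` keeps a small LRU cache of experts in VRAM, copies an expert from system RAM onto the GPU only when it is actually selected, and evicts the least recently used expert when the cache is full (requires a GPU runtime).

```python
# Conceptual expert offloading: most experts live in system RAM, a small
# LRU cache of them lives in VRAM, and experts move across on demand.
import collections
import torch
import torch.nn as nn

class ExpertOffloader:
    """Hypothetical LRU cache of expert modules split between CPU RAM and GPU VRAM."""

    def __init__(self, experts, gpu_slots, device="cuda"):
        self.experts = [e.to("cpu") for e in experts]   # everything starts in system RAM
        self.gpu_slots = gpu_slots                      # how many experts fit in VRAM
        self.device = device
        self.on_gpu = collections.OrderedDict()         # expert index -> None, in LRU order

    def fetch(self, idx):
        if idx in self.on_gpu:
            self.on_gpu.move_to_end(idx)                # cache hit: refresh LRU order
        else:
            if len(self.on_gpu) >= self.gpu_slots:      # cache full: evict least recently used
                evicted, _ = self.on_gpu.popitem(last=False)
                self.experts[evicted].to("cpu")
            self.experts[idx].to(self.device)           # copy the needed expert into VRAM
            self.on_gpu[idx] = None
        return self.experts[idx]

# Toy usage: 8 small "experts", only 2 resident on the GPU at any time.
experts = [nn.Linear(32, 32) for _ in range(8)]
cache = ExpertOffloader(experts, gpu_slots=2)
x = torch.randn(1, 32, device="cuda")
for selected in (3, 5, 3, 7):                           # indices a router might pick
    y = cache.fetch(selected)(x)
```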

Google Colab running Mixtral 8x7B MoE AI model

Check out the tutorial below, kindly created by Prompt Engineering, which provides more information on the research paper and on how you can run Mixtral 8x7B MoE in Google Colab while using less memory than normally required.


Another critical aspect of managing VRAM usage is quantization of the model. This process reduces the precision of the model’s weights, which shrinks its size and, consequently, the VRAM it occupies. The performance impact is minimal, making it a smart trade-off, and mixed quantization techniques are used to strike the right balance between efficiency and memory usage.
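For a rough sense of why lower precision matters, the back-of-the-envelope arithmetic below (approximate parameter count, weights only, ignoring activations and the KV cache) shows how the weight footprint shrinks as the bits per weight drop; the actual project uses a mixed scheme rather than a single uniform bit width.

```python
# Back-of-the-envelope weight-memory estimate (illustrative; ignores activations,
# the KV cache and framework overhead).
def approx_weight_memory_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9

TOTAL_PARAMS = 46.7e9  # Mixtral 8x7B total parameter count (approximate)

for bits in (16, 8, 4, 3):
    print(f"{bits:>2}-bit weights: ~{approx_weight_memory_gb(TOTAL_PARAMS, bits):.0f} GB")
# 16-bit ~93 GB, 8-bit ~47 GB, 4-bit ~23 GB, 3-bit ~18 GB
```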

To take advantage of these methods and run the Mixtral 8x7B MoE model successfully, your hardware should have at least 12 GB of VRAM and enough system RAM to accommodate the offloaded data. The process begins with setting up your Google Colab environment, which involves cloning the necessary repository and installing the required packages. After this, you’ll need to adjust the model parameters, offloading, and quantization settings to suit your hardware’s specifications.
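A Colab setup cell typically looks something like the sketch below; the repository URL and requirements file here are placeholders, so substitute the exact names given in the tutorial and paper above.

```python
# Colab setup cell (placeholders: use the repository and requirements named in the tutorial).
!git clone https://github.com/<org>/<mixtral-offloading-repo>.git
%cd <mixtral-offloading-repo>
!pip install -q -r requirements.txt

# Free-tier sanity check: confirm the ~16 GB GPU and the available system RAM.
!nvidia-smi --query-gpu=name,memory.total --format=csv
!free -h
```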

An integral part of the setup is the tokenizer, which converts text into tokens the model can process. Once your environment is ready, you can feed a prompt through the tokenizer and ask the model to generate responses, giving you the outputs you need for your projects. However, it’s important to be aware of potential hiccups, such as the time it takes to download the model and the possibility of Google Colab timeouts, which can interrupt your work. To ensure a smooth experience, plan ahead and adjust your settings to avoid these issues.
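A minimal prompt-to-response loop might look like the following sketch, which assumes `model` is the offloaded, quantized Mixtral instance produced by the setup step; the tokenizer uses the standard Hugging Face transformers API with the public mistralai/Mixtral-8x7B-Instruct-v0.1 checkpoint.

```python
# Minimal prompt -> generate -> decode loop, assuming `model` was already built
# by the offloading setup above and lives (partly) on the GPU.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

prompt = "Explain mixture-of-experts models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```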

Through the strategic application of offloading and quantization, running the Mixtral 8x7B MoE model on Google Colab with limited VRAM is not only possible but also practical. By following the guidance provided, users can harness the power of large AI models on commonly available hardware, opening up new possibilities in the realm of artificial intelligence. This approach democratizes access to cutting-edge AI technology, allowing a broader range of individuals and organizations to explore and innovate in this exciting field.

Image Credit: Prompt Engineering
