
Apple’s new Final Cut Pro apps turn the iPad into an impressive live multicam studio


At Let Loose 2024, Apple revealed big changes coming to its Final Cut software, ones that effectively turn your iPad into a mini production studio. Chief among these is the launch of Final Cut Pro for iPad 2, a direct upgrade to the current app that takes full advantage of the new M4 chipset. According to the company, it can render videos up to twice as fast as Final Cut Pro running on an M1 iPad.

Apple is also introducing a feature called Live Multicam. This allows users to connect their tablet to up to four different iPhones or iPads at once and watch a video feed from all the sources in real time. You can even adjust the “exposure, focus, [and] zoom” of each live feed directly from your master iPad.



I’ve seen Sony’s impressive new mini-LED TV backlight tech in action, and OLED TVs should be worried


Sony made a special occasion of its 2024 TV launch, holding it at the Sony Pictures Studios lot in Los Angeles. At the event, attendees, myself included, were treated to demos of Foley effects and soundtrack mixing, plus other striking examples of the behind-the-scenes movie magic that happens at the studio. Sony’s message, made loud and clear at the event, was that the technology that goes into movie and TV creation via its studio and professional camera and display divisions trickles down into its consumer products.

The Sony Bravia 9 is the flagship model of the new Bravia series TVs, taking that crown from the Sony A95L OLED TV, which will continue in the lineup for 2024. Interestingly, the Bravia 9 is a mini-LED TV. That marks a change in direction for Sony, a brand that in the past had regularly positioned OLED as the most premium technology in its TV lineup.

Sony’s re-positioning of mini-LED at the top of the TV food chain results from two tech developments at the company. The first is the creation of the BVM-HX3110, a professional mastering monitor capable of 4,000 nits peak brightness. The BVM-HX3110 was introduced in late 2023, and replaces the BVM-HX310, a standard model for movie post-production that tops out at 1,000 nits peak brightness.


The backlight LED driver panel used in Sony’s Bravia 9 TVs. Those tiny black stripes are the mini-LED modules. (Image credit: Future)

The second development is XR Backlight Master Drive with High Peak Luminance, a new TV backlight technology used exclusively in the Sony Bravia 9 mini-LED TV. According to Sony, its next-gen backlight tech is responsible for a 50% brightness boost in the Bravia 9 over the company’s previous flagship mini-LED model, the Sony X95L, along with a 325% increase in local dimming zones – something it accomplishes through a new, highly miniaturized 22-bit LED driver.





Eclipse watchers stunned by impressive ‘red dot’ prominences


Hello Nature readers, would you like to get this Briefing in your inbox free every day? Sign up here.


Bright red spots called prominences appeared along the solar disk during the total eclipse. Credit: Sumeet Kulkarni/Nature

Yesterday’s total eclipse stunned skywatchers. “It makes your heart want to skip a beat, and you cannot really describe it to someone who hasn’t experienced it in person,” retired educator Lynnice Carter told Nature. Some people could spot impressive solar prominences as reddish dots around the edge of the Moon’s silhouette. Prominences are enormous loops of plasma, many times bigger than Earth, that can last several months. They often appear red because the hydrogen they contain glows at extremely high temperatures.

Scientific American | 4 min read

An animated slideshow of three images showing an eclipse over a lake, the violet ‘ring of fire’ around the eclipsed Sun and a dog wearing eclipse glasses.

Thank you to everyone who’s shared their eclipse images with us! Connie Friedman’s view from a canoe on Lake Erie, Beth Peshkin’s portrait of Carly the dog putting safety first and Les Jones’s image of totality in Kingston, Canada, are among our favourites so far.

Modern Blackfoot people are closely related to the first humans who populated the Americas after the last ice age. DNA analysis of six modern and seven historic individuals shows that they belong to a previously undescribed genetic lineage that extends back more than 18,000 years. The data add to evidence from Blackfoot oral traditions and archaeological findings, and could support Blackfoot claims to ancestral lands.

Science | 5 min read

Reference: Science Advances paper

Some scientists in Brazil say their labs won’t have enough money to cover basic expenses such as electricity and water unless more funding is found. Institutions in the Amazon argue that they are the hardest hit because their federal support is already disproportionately low. President Luiz Inácio Lula da Silva’s administration is fighting to reverse some of the budget cuts imposed by the country’s legislators.

Nature | 5 min read

Features & opinion

The very different ways that communities of desert ants and forest ants find their food demonstrate how our unpredictably messy world drives the evolution of social behaviours, argues biologist Deborah Gordon in The Ecology of Collective Behavior. The idea is not as contentious as Gordon makes out, writes reviewer and ecologist Seirian Sumner. But it still highlights a crucial point: “The interactions between organisms and their environments have become increasingly overlooked because fewer researchers are studying animals in their natural environments.”

Nature | 6 min read

“I tried to compensate for my disability by working longer hours,” recalls biochemist Kamini Govender, who has a condition that severely affects her peripheral vision. She developed coping strategies, but ended up working at an unsustainable pace. “Over time, I have learnt to practise better self-care by knowing when to stop.” More needs to be done to include people with disabilities, Govender says. “In the sciences, few of these people make it to the level that I have, because of all the hurdles they come across. It’s easier to quit and give up.”

Nature | 6 min read

Several student-led groups and conferences are working to ensure that students have a say in determining AI’s role in education. Students recognize that the technology can be a double-edged sword, but caution against knee-jerk blanket bans. “In talking to lecturers, I noticed that there’s a gap between what educators think students do with ChatGPT and what students actually do,” says computer science student Johnny Chang.

Nature | 9 min read

Where I work

Dario Sandrini hikes the grounds of the Anse La Roche Nature Reserve in the north of Carriacou, Grenadines.

Dario Sandrini is director of the KIDO Foundation, Carriacou Island, Grenadines of Grenada.Credit: Micah B Rubin for Nature

Dario Sandrini’s environment and education foundation, KIDO, has run around 30 projects on the small Caribbean island of Carriacou — from protecting sea turtles to replanting mangroves. He’s now working on restoring areas that have been logged, in some cases, illegally. “With another ten years of care, we will see the forest resurge,” he says. (Nature | 3 min read)

Quote of the day

Don’t judge other people on the basis of the Dunning-Kruger effect, the cognitive bias he co-discovered, says social psychologist David Dunning. Instead, use the fact that people with limited competence in an area tend to overestimate their expertise to reflect on yourself. (Scientific American podcast | 33 min listen or 11 min read)

Today, I’m enjoying biologist-comedian Adam Ruben’s musings on those physically repetitive tasks that are part of many scientists’ lives. “So many accomplishments in science are vaporous,” Ruben writes. Although manual lab work can be unbelievably boring, it can also be incredibly satisfying. “It meant I had accomplished something tangible,” he says.

Please tell me about your favourite dull (lab) tasks, alongside any other feedback on this newsletter, by sending an email to [email protected].

Thanks for reading,

Katrina Krämer, associate editor, Nature Briefing

With contributions by Flora Graham and Sarah Tomlin

Want more? Sign up to our other free Nature Briefing newsletters:

Nature Briefing: Anthropocene — climate change, biodiversity, sustainability and geoengineering

Nature Briefing: AI & Robotics — 100% written by humans, of course

Nature Briefing: Cancer — a weekly newsletter written with cancer researchers in mind

Nature Briefing: Translational Research covers biotechnology, drug discovery and pharma



30 years of the Dell Latitude – from impressive battery-powered productivity to AI PCs


For decades, businesses in the US and around the world have relied on specially engineered business laptops to boost productivity.

One such model that’s been a staple for 30 years is the Dell Latitude family of enterprise laptops, starting with the Dell Latitude XP in 1994. Since then, Dell has continued to manufacture machines widely considered among the best business laptops, but it’s worth casting our eye back to the Latitude XP, the machine that started it all.



Mistral AI Mixtral 8x7B mixture of experts AI model impressive benchmarks revealed


Mistral AI has recently unveiled an innovative mixture of experts model that is making waves in the field of artificial intelligence. This new model, which is now available through Perplexity AI at no cost, has been fine-tuned with the help of the open-source community, positioning it as a strong contender against the well-established GPT-3.5. The model’s standout feature is its ability to deliver high performance while potentially requiring as little as 4 GB of VRAM, thanks to advanced compression techniques that preserve its effectiveness; a sketch of how such compression is typically applied follows the quote below. This breakthrough suggests that even those with limited hardware resources could soon have access to state-of-the-art AI capabilities. Mistral AI explains more about the new Mixtral 8x7B:

“Today, the team is proud to release Mixtral 8x7B, a high-quality sparse mixture of experts model (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT3.5 on most standard benchmarks.”
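As for the “advanced compression techniques” mentioned above, in practice this usually means weight quantization. Here is a rough illustration only, a minimal sketch assuming the Hugging Face transformers and bitsandbytes libraries rather than any tooling from Mistral itself:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the open weights to 4-bit NF4 at load time; the matrix maths
# still runs in float16, which is what preserves most of the model's quality.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "mistralai/Mixtral-8x7B-v0.1"  # Mistral's open-weight repo on the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spill layers that don't fit in VRAM over to CPU RAM
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

Even quantized, the full model wants more memory than a small graphics card offers; it is the device_map="auto" offloading that makes very small VRAM budgets workable, at some cost in speed.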

The release of Mixtral 8x7B marks a significant advance in the development of sparse mixture-of-experts models (SMoEs). With open weights under the permissive Apache 2.0 license, 6x faster inference than Llama 2 70B, and benchmark results that match or surpass GPT-3.5, it currently offers the best cost/performance trade-off among open-weight models.

Mixtral 8x7B exhibits several impressive capabilities. It handles a context of 32k tokens and supports multiple languages, including English, French, Italian, German, and Spanish. Its performance in code generation is strong, and it can be fine-tuned into an instruction-following model, achieving a score of 8.3 on MT-Bench.


The benchmark achievements of Mistral AI’s model are not just impressive statistics; they represent a significant stride forward against existing models such as GPT-3.5. The potential impact of having such a powerful tool freely available is immense, and it’s an exciting prospect for those interested in leveraging AI for various applications. The model’s performance on challenging benchmarks, such as HellaSwag and MMLU, is particularly noteworthy. These benchmarks are essential for gauging the model’s strengths and identifying areas for further enhancement.


The architecture of Mixtral is particularly noteworthy. It’s a decoder-only sparse mixture-of-experts network, using a feedforward block that selects from 8 distinct groups of parameters. A router network at each layer chooses two groups to process each token, combining their outputs additively. Although Mixtral has 46.7B total parameters, it only uses 12.9B parameters per token, maintaining the speed and cost efficiency of a smaller model. This model is pre-trained on data from the open web, training both experts and routers simultaneously.
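To make that routing mechanism concrete, here is a toy sketch of a top-2 sparse mixture-of-experts feed-forward layer in PyTorch. It illustrates the idea described above rather than Mistral’s actual implementation, and the dimensions are invented for the example:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Feed-forward block that routes each token to 2 of 8 expert networks."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)  # one score per expert
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                                     # (n_tokens, n_experts)
        weights, chosen = torch.topk(scores, self.top_k, dim=-1)    # keep the 2 best experts
        weights = F.softmax(weights, dim=-1)                        # normalise their weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e  # tokens whose slot-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out  # expert outputs combined additively, as described above

# Tiny smoke test: 16 tokens of width 512.
y = SparseMoELayer()(torch.randn(16, 512))

Only two of the eight expert blocks execute for any one token, which is how Mixtral touches just 12.9B of its 46.7B parameters per token.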

In comparison to other models such as the Llama 2 family and GPT-3.5, Mixtral matches or outperforms them on most benchmarks. It also exhibits more truthfulness on the TruthfulQA benchmark and less bias on the BBQ benchmark than Llama 2.

Moreover, Mistral AI also released Mixtral 8x7B Instruct alongside the base model. This version has been optimized through supervised fine-tuning and direct preference optimization (DPO) for precise instruction following, reaching a score of 8.30 on MT-Bench. This makes it one of the best open-source models, comparable to GPT-3.5 in performance. The model can be prompted to exclude certain outputs for applications requiring high levels of moderation, demonstrating its flexibility and adaptability.
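For readers who want to try the instruct variant, prompts need Mistral’s [INST] chat format, which the tokenizer can produce automatically. A minimal sketch, again assuming the Hugging Face transformers library; the moderation wording shown is an invented example of the kind of prompting described above, not an official guardrail prompt:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

# Moderation guidance can simply be prepended to the user turn; the chat
# template wraps each user message in [INST] ... [/INST] markers.
messages = [{
    "role": "user",
    "content": "Answer respectfully and refuse harmful requests. "
               "Explain what a sparse mixture of experts is.",
}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # -> "<s>[INST] Answer respectfully and ... [/INST]"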

To support the deployment and usage of Mixtral, changes have been submitted to the vLLM project, incorporating Megablocks CUDA kernels for efficient inference. Furthermore, SkyPilot enables the deployment of vLLM endpoints on cloud instances, enhancing the accessibility and usability of Mixtral in various applications.
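As a rough sketch of what serving the model through vLLM looks like, assuming a vLLM build that includes the Mixtral support described above and enough GPU memory for the weights:

from vllm import LLM, SamplingParams

# Load the instruct model; tensor_parallel_size shards the weights across GPUs.
llm = LLM(model="mistralai/Mixtral-8x7B-Instruct-v0.1", tensor_parallel_size=2)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["[INST] What is a sparse mixture of experts? [/INST]"], params)
print(outputs[0].outputs[0].text)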

AI fine-tuning and training

The training and fine-tuning process of the model, which includes instruct datasets, plays a critical role in its success. These datasets are designed to improve the model’s ability to understand and follow instructions, making it more user-friendly and efficient. The ongoing contributions from the open-source community are vital to the model’s continued advancement. Their commitment to the project ensures that the model remains up-to-date and continues to improve, embodying the spirit of collective progress and the sharing of knowledge.

As anticipation builds for more refined versions and updates from Mistral AI, the mixture of experts model has already established itself as a significant development. With continued support and development, it has the potential to redefine the benchmarks for AI performance.

Mistral AI’s mixture of experts model is a notable step forward in the AI landscape, particularly in the development of efficient and powerful SMoEs. With its strong benchmark scores, availability at no cost through Perplexity AI, and the support of a dedicated open-source community, the model is well-positioned to make a lasting impact. The possibility of it operating on just 4 GB of VRAM, together with its performance, versatility, and improvements in truthfulness and bias, opens up exciting opportunities for broader access to advanced AI technologies.

Image Credit: Mistral AI



Google’s New Gemini AI Language Model Is Impressive (Video)


Google Gemini is Google’s latest AI language model. It will be available in three versions: Gemini Ultra, which is coming next year; Gemini Pro, which will be available in Bard; and Gemini Nano, which is coming to devices like the Google Pixel 8 Pro.

Google has now released a video of its new language model in action, so we get to see how it performs. The video below shows someone interacting with Gemini; let’s find out what it can do.

As we can see from the video, Gemini is very impressive in the various visual interactions: it can determine exactly what the person is doing and what objects it is being shown, and it is also able to make choices based on the objects, drawings, and so on that it has been shown.

Google is currently building Gemini Pro into Google Bard, and it will also bring its Gemini Nano model to mobile devices, starting with the Google Pixel 8 Pro smartphone.

The top model, the new Gemini Ultra, will launch in early 2024. It is not yet clear whether Google will make it free like the other versions or whether Ultra will be a paid model like OpenAI’s GPT-4; we are looking forward to finding out exactly what Google has planned for its new AI models. You can find out more details on Google’s website at the link below.

Source Google

Image Credit: Google/YouTube



BMW M4 GT4 has an impressive first year


BMW has revealed that its BMW M4 GT4 race car had an impressive first year in 2023, finishing the season with 180 podium finishes, including more than 70 class victories.

Due to high demand from race teams, BMW will produce an extra 50 units of the car, and it is also working on a new EVO version of the BMW M4 GT4.

BMW M Motorsport teams celebrated victories and titles with the BMW M4 GT4 in Germany, Europe, Asia, and North America. The pleasingly long list of major successes includes: the SP10 class victory for FK Performance Motorsport at the 24h Nürburgring (GER), the SP10 overall victory in the Nürburgring Endurance Series (NLS), the title win in the GT4 European Series for Hofor Racing by Bonk Motorsport, the title wins for Turner Motorsport in the IMSA Michelin Pilot Challenge and the IMSA VP Racing SportsCar Challenge, the title win for Auto Technic Racing in the GT4 America, and the title win in the GT4 Asia for YZ Racing with BMW Team Studie.

“What a debut year for the new BMW M4 GT4! We were confident enough before the season to assume that we could continue the successes of its predecessor with this car, but the large number of victories and title wins that our teams and drivers were able to achieve right away was even a positive surprise for us,” said Björn Lellmann, Head of Customer Racing at BMW M Motorsport. “However, we are not settling on these successes. Due to the high demand, we are currently producing 50 additional units of the BMW M4 GT4, which will be delivered to our customers in 2024. In addition, we have decided to develop an EVO version of the BMW M4 GT4 after the BMW M4 GT3. The goal is to make an already very strong car even better with the help of our customers’ feedback.”

You can find out more details about the BMW M4 GT4 race car on BMW’s website at the link below. It will be interesting to see how the car performs in the 2024 racing season.

Source BMW
