
Samsung Galaxy S21 FE joins the One UI 6.1 update party


Samsung has released the One UI 6.1 update for the Galaxy S21 FE, a couple of days earlier than expected. The Galaxy S21, S21+, and S21 Ultra got One UI 6.1 a few days back, but as is often the case, the Fan Edition model was not included in the initial rollout of the latest One UI update.

Galaxy S21 FE One UI 6.1 update debuts in the USA

The USA is the first market where the Galaxy S21 FE has received the One UI 6.1 update. The update only seems to be available to those using the phone on T-Mobile’s network, but we can expect a rollout for other carrier variants and unlocked units soon. It shouldn’t take long for the update to go live in other markets, either.

Samsung could also start making One UI 6.1 available for mid-range devices like the Galaxy A54 and Galaxy A53 in the next few days. However, there is no official confirmation from the Korean giant, only hints provided by a Canadian carrier, so we would suggest keeping your expectations in check until the rollout actually begins.

If you own a Galaxy S21 FE, you can discover all that the One UI 6.1 update brings to your phone in the official changelog available here. Just don’t expect any of the fancy Galaxy AI features. Google’s Circle to Search is the only major AI feature that One UI 6.1 brings to flagship Galaxy phones launched in 2021.

To download the One UI 6.1 update, open the Settings app on your phone, select Software update, then tap Download and install. Considering the issues Galaxy S22 owners faced with One UI 6.1, you may want to follow some of the advice here before installing the update on your S21 FE.

Galaxy S21 FE One UI 6.1 update

Thanks for the tip, Amir!



The Morning After: Hulu officially joins Disney+


A month after taking full ownership of Hulu last November, Disney started beta testing integration with Disney+. Today, Hulu on Disney+ is officially out of beta, making it easy for subscribers to access content for both services. It’s also a way for Disney to push its Hulu bundle, which starts at $9.99 a month with ads. And if you want to go ad-free and download content for offline viewing, there’s the Duo Premium bundle for $19.99 a month.

All your favorite Hulu content is in its own tab, but the big shows (like Shogun) will feature in the main show carousel too. However, if you’re a long-time Hulu viewer, you’ll lose your viewing progress on things you’ve already watched or half-watched.

— Mat Smith

The biggest stories you might have missed

You can get these reports delivered daily, direct to your inbox. Subscribe right here!

GLAAD found plenty of policy violations where Meta took no action.

Surprise! Meta is failing to enforce its own rules against anti-trans hate speech on its platforms. GLAAD warns that “extreme anti-trans hate content remains widespread across Instagram, Facebook and Threads.” It reported dozens of examples of hate speech from Meta’s apps, posted between June 2023 and March 2024. Despite the posts clearly violating Meta’s policies, the company either claimed the “posts were not violative or simply did not take action on them,” according to GLAAD. The group also shared two examples of posts from Threads, Meta’s newest app, where the company has tried to limit the reach of “political” content and related topics.

GLAAD’s report isn’t the first time Meta’s been criticized for not protecting LGBTQIA+ users. Last year, its own Oversight Board urged the company to “improve the accuracy of its enforcement on hate speech towards the LGBTQIA+ community.”


You can play as Black Panther, Spider-Man, Magneto and more.


Marvel Games

Marvel Rivals is a third-person 6v6 team-based shooter that sounds very Overwatch-like. It’ll be free to play, and it’s set inside a “continually evolving universe,” which probably means new levels, new characters and new gameplay modes over time. Testers will be able to play as Spider-Man, Black Panther, Magneto, Magik and eight or nine more unannounced characters. The developers added that Rocket Raccoon, Groot, Hulk and Iron Man would also eventually be playable. The alpha will be available in May for PC players. There’s no word on a console release.


Eight years after launch.

Yes, No Man’s Sky is still getting major updates. Developer Hello Games’ next update, due Wednesday, adds procedurally generated space stations (so they’ll be different every time), a ship editor and a Guild system to the nearly eight-year-old space exploration sim. The stations’ broader scale will be evident from the outside, while their interiors will include new shops, gameplay and things to do, including interacting with all those guilds.




Intel joins the MLCommons AI Safety Working Group


Intel is making strides in the field of artificial intelligence (AI) safety and recently became a founding member of the AI Safety (AIS) working group, organized by MLCommons. This marks a significant step in Intel’s ongoing commitment to responsibly advancing AI technologies.

What is the MLCommons AI Safety Working Group?

The MLCommons AI Safety Working Group has a comprehensive mission to support the community in developing AI safety tests and to establish research and industry-standard benchmarks based on these tests. Their primary goal is to guide responsible development of AI systems, drawing inspiration from how computing performance benchmarks like MLPerf have helped to set concrete objectives and thereby accelerate progress. In a similar vein, the safety benchmarks developed by this working group aim to provide a clear definition of what constitutes a “safer” AI system, which could significantly speed up the development of such systems.

Another major purpose of the benchmarks is to aid consumers and corporate purchasers in making more informed decisions when selecting AI systems for specific use-cases. Given the complexity of AI technologies, these benchmarks offer a valuable resource for evaluating the safety and suitability of different systems.

Additionally, the benchmarks are designed to inform technically sound, risk-based policy regulations. This comes at a time when governments around the world are increasingly focusing on the safety of AI systems, spurred by public concern.

To accomplish these objectives, the working group has outlined four key deliverables.

  1. They curate a pool of safety tests and work on developing better testing methodologies.
  2. They define benchmarks for specific AI use-cases by summarizing test results in an easily understandable manner for non-experts.
  3. They are developing a community platform that will serve as a comprehensive resource for AI safety testing, from registering tests to viewing benchmark scores.
  4. They are working on defining a set of governance principles and policies through a multi-stakeholder process to ensure that decisions are made in a trustworthy manner.

The group holds weekly meetings to discuss these topics, and anyone interested in joining can sign up via their organizational email.
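To make the second deliverable concrete: summarizing test results "in an easily understandable manner for non-experts" essentially means collapsing many per-test scores into a single consumer-facing grade. The sketch below is purely illustrative; the test names, score scale, and grade thresholds are invented for this example and are not MLCommons definitions.

```python
# Hypothetical illustration of deliverable 2: collapsing per-test AI safety
# results into a grade a non-expert can read. All names and thresholds here
# are invented, not part of any actual MLCommons benchmark.

def summarize_safety(results: dict[str, float]) -> str:
    """Collapse per-test pass rates (0.0 to 1.0) into a letter grade."""
    # Grade on the weakest area: a system is only as safe as its worst test.
    worst = min(results.values())
    if worst >= 0.99:
        return "A (low risk)"
    if worst >= 0.90:
        return "B (moderate risk)"
    return "C (high risk)"

scores = {
    "hate_speech_refusal": 0.97,
    "self_harm_refusal": 0.92,
    "dangerous_advice_refusal": 0.88,
}
print(summarize_safety(scores))  # -> C (high risk): the weakest test dominates
```

Grading on the minimum rather than the average is one plausible design choice: averaging would let strong results in one hazard category mask serious failures in another, which is exactly what a consumer-facing safety grade should not do.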


AIS working group

The AIS working group is a collective of AI experts from both industry and academia. As a founding member, Intel is set to contribute its vast expertise to the creation of a platform for benchmarks that measure the safety and risk factors associated with AI tools and models. This collaborative effort is geared towards developing standard AI safety benchmarks as testing matures, a crucial step in ensuring AI deployment and safety in society.

One of the key areas of focus for the AIS working group, and indeed for Intel, is the responsible training and deployment of large language models (LLMs). These powerful AI tools have the capacity to generate human-like text, making them invaluable across a range of applications from content creation to customer service. However, their potential misuse poses significant societal risks, making the development of safety benchmarks for LLMs a priority for the working group.

To aid in evaluating the risks associated with rapidly evolving AI technologies, the AIS working group is also developing a safety rating system. This system will provide a standardized measure of the safety of various AI tools and models, helping industry and academia alike to make informed decisions about their use and deployment.

“Intel is committed to advancing AI responsibly and making it accessible to everyone. We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we’re pleased to join the industry in defining the new processes, methods and benchmarks to improve AI everywhere,” said Deepak Patil, Intel corporate vice president and general manager, Data Center AI Solutions.

Intel’s participation in the AIS working group aligns with its commitment to the responsible advancement of AI technologies. The company plans to share its AI safety findings, best practices, and responsible development processes such as red-teaming and safety tests with the group. This sharing of knowledge and expertise is expected to aid in the establishment of a common set of best practices and benchmarks for the safe development and deployment of AI tools.

The initial focus of the AIS working group is to develop safety benchmarks for LLMs, building on research from Stanford University’s Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM). Intel will also share with the working group the internal review processes it uses to develop its own AI models and tools, which should feed directly into those common benchmarks for generative AI tools leveraging LLMs.

Intel’s involvement in the MLCommons AI Safety working group is a significant step in the right direction towards ensuring the responsible development and deployment of AI technologies. The collaborative efforts of this group will undoubtedly contribute to the development of robust safety benchmarks for AI tools and models, ultimately mitigating the societal risks posed by these powerful technologies.

Source and Image Credit: Intel

Filed Under: Technology News






Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.