
Are networks ready for Apple Vision Pro?


The term ‘Metaverse’ originated in Neal Stephenson’s 1992 novel Snow Crash, where it described a virtual realm offering its characters an escape from reality. Stephenson’s visionary concept of the Metaverse has since transitioned from fiction to (virtual) reality over the past decade. Augmented- and virtual-reality technologies have advanced the concept, overlaying digital content on the real world and immersing users in digital experiences.

This year marked Apple’s eagerly anticipated debut in this dynamic technology with the launch of the Apple Vision Pro. Apple refers to its brand of AR and VR metaverse technology as “spatial computing”, which “seamlessly blends digital content with the physical world” and uses hand and eye tracking for a fluid digital experience. The hefty $3,499 price tag hasn’t been enough to put off Apple enthusiasts, with analysts suggesting that initial sales have been between 160,000 and 200,000 units.


I’m ready to throw out my iRobot Roomba in favor of Samsung’s new Jet Bot Combo AI robot vacuum


Recently, I’ve had a bit of a ‘mare with my robot vacuum. I’ve been using the iRobot Roomba Combo J7 Plus since I first reviewed it a year and a half ago, and while it’s been a stellar sucker until the last few months, a litany of sudden issues has me wanting to try something new.

It’s not the first iRobot Roomba I’ve tried, and I doubt it’ll be the last, but given its lofty price, I’m pretty surprised by some of its issues. Despite my regular cleaning and maintenance, the mop function has stopped working almost entirely. Suddenly, the vacuum is rubbish at cleaning edges, and despite the really impressive navigation technology I observed during my test, my Combo J7 Plus somehow managed to gouge out a chunk of its camera lens during a cleaning job, meaning obstacle detection is permanently marred. 


Ready or not, AI is coming to science education — and students have opinions


Leo Wu, an economics student at Minerva University in San Francisco, California, founded a group to discuss how AI tools can help in education. Credit: AI Consensus

The world had never heard of ChatGPT when Johnny Chang started his undergraduate programme in computer engineering at the University of Illinois Urbana–Champaign in 2018. All that the public knew then about assistive artificial intelligence (AI) was that the technology powered joke-telling smart speakers or somewhat fitful smartphone assistants.

But, by his final year in 2023, Chang says, it became impossible to walk through campus without catching glimpses of generative AI chatbots lighting up classmates’ screens.

“I was studying for my classes and exams and as I was walking around the library, I noticed that a lot of students were using ChatGPT,” says Chang, who is now a master’s student at Stanford University in California. He studies computer science and AI, and is a student leader in the discussion of AI’s role in education. “They were using it everywhere.”

ChatGPT is one example of the large language model (LLM) tools that have exploded in popularity over the past two years. These tools work by taking user inputs in the form of written prompts or questions and generating human-like responses, drawing on patterns learned from a vast catalogue of text, much of it gathered from the Internet. As such, generative AI produces new data based on the information it has already seen.
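
As a rough illustration of the mechanics (not a description of any particular product), the short Python sketch below sends a written prompt to a commercial LLM service and prints the generated response. The openai client library, the model name and the prompt are illustrative assumptions; any comparable API would work in much the same way.

  # Illustrative sketch only: prompt a large language model and print its reply.
  # Assumes the openai Python package is installed and OPENAI_API_KEY is set.
  from openai import OpenAI

  client = OpenAI()  # reads the API key from the environment
  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model choice
      messages=[{"role": "user",
                 "content": "Make five flashcards (question and answer) about photosynthesis."}],
  )
  print(response.choices[0].message.content)  # the generated, human-like text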

However, these newly generated data — from works of art to university papers — often lack accuracy and creative integrity, ringing alarm bells for educators. Across academia, universities have been quick to place bans on AI tools in classrooms to combat what some fear could be an onslaught of plagiarism and misinformation. But others caution against such knee-jerk reactions.

Victor Lee, who leads Stanford University’s Data Interactions & STEM Teaching and Learning Lab, says that data suggest that levels of cheating in secondary schools did not increase with the roll-out of ChatGPT and other AI tools. He says that part of the problem facing educators is the fast-paced changes brought on by AI. These changes might seem daunting, but they’re not without benefit.

Educators must rethink the model of written assignments “painstakingly produced” by students using “static information”, says Lee. “This means many of our practices in teaching will need to change — but there are so many developments that it is hard to keep track of the state of the art.”

Despite these challenges, Chang and other student leaders think that blanket AI bans are depriving students of a potentially revolutionary educational tool. “In talking to lecturers, I noticed that there’s a gap between what educators think students do with ChatGPT and what students actually do,” Chang says. For example, rather than asking AI to write their final papers, students might use AI tools to make flashcards based on a video lecture. “There were a lot of discussions happening [on campus], but always without the students.”

Computer-science master’s student Johnny Chang started a conference to bring educators and students together to discuss the responsible use of AI. Credit: Howie Liu

To help bridge this communications gap, Chang founded the AI x Education conference in 2023 to bring together secondary and university students and educators to have candid discussions about the future of AI in learning. The virtual conference included 60 speakers and more than 5,000 registrants. This is one of several efforts set up and led by students to ensure that they have a part in determining what responsible AI will look like at universities.

Over the past year, at events in the United States, India and Thailand, students have spoken up to share their perspectives on the future of AI tools in education. Although many students see benefits, they also worry about how AI could damage higher education.

Enhancing education

Leo Wu, an undergraduate student studying economics at Minerva University in San Francisco, California, co-founded a student group called AI Consensus. Wu and his colleagues brought together students and educators in Hyderabad, India, and in San Francisco for discussion groups and hackathons to collect real-world examples of how AI can assist learning.

From these discussions, students agreed that AI could be used to disrupt the existing learning model to make it more accessible for students with different learning styles or who face language barriers. For example, Wu says that students shared stories about using multiple AI tools to summarize a lecture or a research paper and then turn the content into a video or a collection of images. Others used AI to transform data points collected in a laboratory class into an intuitive visualization.

For people studying in a second language, Wu says that “the language barrier [can] prevent students from communicating ideas to the fullest”. Using AI to translate these students’ original ideas or rough drafts crafted in their first language into an essay in English could be one solution to this problem, he says. Wu acknowledges that this practice could easily become problematic if students relied on AI to generate ideas, and the AI returned inaccurate translations or wrote the paper altogether.

Jomchai Chongthanakorn and Warisa Kongsantinart, undergraduate students at Mahidol University in Salaya, Thailand, presented their perspectives at the UNESCO Round Table on Generative AI and Education in Asia–Pacific last November. They point out that AI can have a role as a custom tutor to provide instant feedback for students.

“Instant feedback promotes iterative learning by enabling students to recognize and promptly correct errors, improving their comprehension and performance,” wrote Chongthanakorn and Kongsantinart in an e-mail to Nature. “Furthermore, real-time AI algorithms monitor students’ progress, pinpointing areas for development and suggesting pertinent course materials in response.”

Although private tutors could provide the same learning support, some AI tools offer a free alternative, potentially levelling the playing field for students with low incomes.

Jomchai Chongthanakorn gave his thoughts on AI at a UNESCO round table in Bangkok. Credit: UNESCO/Jessy & Thanaporn

Despite the possible benefits, students also express wariness about how using AI could negatively affect their education and research. ChatGPT is notorious for ‘hallucinating’ — producing incorrect information but confidently asserting it as fact. At Carnegie Mellon University in Pittsburgh, Pennsylvania, physicist Rupert Croft led a workshop on responsible AI alongside physics graduate students Patrick Shaw and Yesukhei Jagvaral to discuss the role of AI in the natural sciences.

“In science, we try to come up with things that are testable — and to test things, you need to be able to reproduce them,” Croft says. But, he explains, it’s difficult to know whether things are reproducible with AI because the software operations are often a black box. “If you asked [ChatGPT] something three times, you will get three different answers because there’s an element of randomness.”

And because AI systems are prone to hallucinations and can give answers only on the basis of data they have already seen, truly new information, such as research that has not yet been published, is often beyond their grasp.

Croft agrees that AI can assist researchers, for example, by helping astronomers to find planetary research targets in a vast array of data. But he stresses the need for critical thinking when using the tools. To use AI responsibly, Croft argued in the workshop, researchers must understand the reasoning that led to an AI’s conclusion. To take a tool’s answer simply on its word alone would be irresponsible.

“We’re already working at the edge of what we understand” in scientific enquiry, Shaw says. “Then you’re trying to learn something about this thing that we barely understand using a tool we barely understand.”

These lessons also apply to undergraduate science education, but Shaw says that he’s yet to see AI play a large part in the courses he teaches. At the end of the day, he says, AI tools such as ChatGPT “are language models — they’re really pretty terrible at quantitative reasoning”.

Shaw says it’s obvious when students have used an AI on their physics problems, because they are more likely to have either incorrect solutions or inconsistent logic throughout. But as AI tools improve, those tells could become harder to detect.

Chongthanakorn and Kongsantinart say that one of the biggest lessons they took away from the UNESCO round table was that AI is a “double-edged sword”. Although it might help with some aspects of learning, they say, students should be wary of over-reliance on the technology, which could reduce human interaction and opportunities for learning and growth.

“In our opinion, AI has a lot of potential to help students learn, and can improve the student learning curve,” Chongthanakorn and Kongsantinart wrote in their e-mail. But “this technology should be used only to assist instructors or as a secondary tool”, and not as the main method of teaching, they say.

Equal access

Tamara Paris is a master’s student at McGill University in Montreal, Canada, studying ethics in AI and robotics. She says that students should also carefully consider the privacy issues and inequities created by AI tools.

Some academics avoid using certain AI systems owing to privacy concerns about whether AI companies will misuse or sell user data, she says. Paris notes that widespread use of AI could create “unjust disparities” between students if knowledge or access to these tools isn’t equal.

Tamara Paris says not all students have equal access to AI tools. Credit: McCall Macbain Scholarship at McGill

“Some students are very aware that AIs exist, and others are not,” Paris says. “Some students can afford to pay for subscriptions to AIs, and others cannot.”

One way to address these concerns, says Chang, is to teach students and educators about the flaws of AI and its responsible use as early as possible. “Students are already accessing these tools through [integrated apps] like Snapchat” at school, Chang says.

In addition to learning about hallucinations and inaccuracies, students should also be taught how AI can perpetuate the biases already found in our society, such as discriminating against people from under-represented groups, Chang says. These issues are exacerbated by the black-box nature of AI — often, even the engineers who built these tools don’t know exactly how an AI makes its decisions.

Beyond AI literacy, Lee says that proactive, clear guidelines for AI use will be key. At some universities, academics are carving out these boundaries themselves, with some banning the use of AI tools for certain classes and others asking students to engage with AI for assignments. Scientific journals are also implementing guidelines for AI use when writing papers and peer reviews that range from outright bans to emphasizing transparent use.

Lee says that instructors should clearly communicate to students when AI can and cannot be used for assignments and, importantly, signal the reasons behind those decisions. “We also need students to uphold honesty and disclosure — for some assignments, I am completely fine with students using AI support, but I expect them to disclose it and be clear how it was used.”

For instance, Lee says he’s OK with students using AI in courses such as digital fabrication — AI-generated images are used for laser-cutting assignments — or in learning-theory courses that explore AI’s risks and benefits.

For now, the application of AI in education is a constantly moving target, and the best practices for its use will be as varied and nuanced as the subjects it is applied to. The inclusion of student voices will be crucial to help those in higher education work out where those boundaries should be and to ensure the equitable and beneficial use of AI tools. After all, they aren’t going away.

“It is impossible to completely ban the use of AIs in the academic environment,” Paris says. “Rather than prohibiting them, it is more important to rethink courses around AIs.”


Firewalla unveils the world’s most affordable 10-gigabit smart firewall — ready for next-gen Wi-Fi 7 and high-speed fiber networks, but a price increase is expected soon


Firewalla makes configurable hardware firewalls that connect to your router, providing protection for your home or business against various network and internet threats.

The company has announced the pre-sale of the Firewalla Gold Pro, the newest and most powerful addition to the “Gold” product line. Touted as the world’s most affordable 10-gigabit smart firewall, the device is designed to be compatible with next-generation Wi-Fi 7 and high-speed 5- and 10-gigabit ISP fiber networks.


DJI Avata 2 drone gets likely launch date with official ‘ready to roll’ teaser



  • DJI has released a teaser for a launch event on April 11
  • The teaser shows an FPV drone that looks a lot like the leaked DJI Avata 2
  • The Avata 2 is expected to be launched alongside a new DJI Goggles 3 headset

Just a week after a wave of leaks revealed hands-on videos and retail packaging for a new DJI Avata 2 drone, the drone giant has all but confirmed that the FPV (first-person view) flying machine will be launching on April 11.

A new ‘Ready to Roll’ teaser (below) posted on DJI’s social media and website shows that it’ll be launching a new drone on April 11 at 9am EDT / 2pm BST (or midnight AEST on April 12).




How to get your grill ready for the outdoor season


As temperatures begin to rise, it’s time to prepare your outdoor space for some seasonal relaxation. That, of course, includes showing off your culinary skills on the porch, patio or in the backyard for guests. During the winter, your grill has probably been hibernating, so you’ll need to give it a tune-up before it’s ready for heavy use from spring to fall. Even if you kept the grill going in the cold, now is a great time for a thorough cleaning before the official outdoor cooking season begins. Here are a few tips and tricks that will hopefully make things easier.

Disassemble, scrub, reassemble

Weber's first pellet grill has potential to be a backyard powerhouse, but the smart features need work. (Billy Steele/Engadget)

A good rule of thumb when it comes to cleaning anything you haven’t used in a while is to take it apart as much as you feel comfortable and give it a thorough wipe down. For grills, this means removing the grates and any bars or burner covers – basically, anything you can take out that’s not the heating element. This gives you a chance to inspect the burners of your gas grill or the fire pot of a pellet model for any unsightly wear and tear. If those components are worn out or overly rusted, most companies offer replacements that you can easily swap out with a few basic tools.

Once all the pieces are out, start by scraping excess debris off all sides of the interior – with the help of some cleaner if needed. For a gas grill, this likely means pushing everything out through the grease trap. On a pellet grill, you’ll want to scrape the grease chute clear and out into the catch can, but you’ll also need to vacuum the interior with a shop vac – just like you would after every few hours of use. And while you’re at it, go ahead and empty the hopper of any old pellets that have been sitting since Labor Day. Fuel that’s been sitting in the grill for months won’t give you the best results when it comes time to cook so you might as well start fresh.

Thankfully, pellet grill companies have made easy cleaning a key part of their designs. Weber’s SmokeFire has a set of metal bars on the inside that can be removed quickly to open up the bottom of the chamber. This is also a design feature of the company’s gas grills. Simply vacuum or push the debris out the grease chute. The catch pan where all of the garbage ends up is also easy to access from the front of the grill, and you can remove the aluminum liner and replace it with a new one in seconds.

Traeger’s most recent pellet grills were also redesigned to improve cleaning. Most notably, grease and ash end up in the same “keg” that’s easy to detach from the front of the grill. The company also allows you to quickly remove all of the interior components, though they’re larger than what you find on the SmokeFire. Lastly, Traeger moved the pellet chute to the front of the new Timberline and Ironwood, making it a lot more convenient to swap out wood varieties or empty an old supply.

You’ll want to get as much of the food leftovers out of your grill as possible for a few reasons. First, that stuff is old and lots of build-up over time can hinder cooking performance and might impact flavor. The last thing you want is old food or grease burning off right under an expensive ribeye. Second, in the case of pellet grills, not properly clearing out grease and dust can be dangerous. It’s easy for grease fires to start at searing temperatures and if there’s enough pellet dust in the bottom of your grill, it can actually ignite or explode. That’s why companies tell you to vacuum it out after every few hours of use.


All of that dust, grease and debris should be removed before you fire the grill back up. (Billy Steele/Engadget)

To actually clean the surfaces, you’ll want to get an all-natural grill cleaner. There are tons of options here, and it may take some time to find one you like. I typically use Traeger’s formula since it’s readily available at the places I buy pellets and I’ve found it works well cutting through stuck-on muck. You want an all-natural grill cleaner over a regular household product as it’s safe to use on surfaces that will touch your food. They’re also safe to use on the exterior of your grill without doing any damage to chrome, stainless steel or any other materials.

Spray down the inside and give things a few minutes to work. Wipe it all clean and go back over any super dirty spots as needed. Ditto for the grates, bars and any other pieces you removed. I like to lay these out on a yard waste trash bag (they’re bigger than kitchen bags) so all the stuff I scrape or clean off doesn’t get all over my deck. You can use shop towels if you want to recycle or paper towels if not, but just know whatever you choose will be covered in nasty black grime so you won’t want to just toss them in the clothes washer when you’re done. A pre-wash in a bucket or sink is needed to make sure you don’t transfer gunk from your grill to your business casuals.

In terms of tools, you don’t need much. I’ve tried that grill robot that claims to do the job for you, but I’ve found sticking to the basics is more efficient. And honestly, when you get the hang of it, it doesn’t take all that long. It’s a good idea to have a wire brush specifically for the grates that you don’t use to clean anything else. After all, this will be touching the same surfaces you put food on. I recommend another, smaller wire brush – the ones that look like big toothbrushes – for cleaning the burners on a gas grill. If you notice the flame isn’t firing through one of the holes, you can use this to clean the pathway. Lastly, plastic is the way to go for a scraper; anything else and you risk scratching the surfaces of your grill. Sure, any damage done would be on the inside, but it’s still not a great feeling to nick up your precious investment.

Check for updates before your first cook

Traeger WiFire app

Traeger

If you have a smart grill from the likes of Traeger, Weber or another company, you’ll want to plug it in and check for software updates well in advance of your first grilling session. Chances are you haven’t cooked much since last fall, which means companies have had months to push updates to their devices. Trust me, there’s nothing worse than spending an hour trimming and seasoning a brisket only to walk outside to start the grill and it immediately launches into the update process. This could extend the whole cooking time significantly depending on the extent of the firmware additions and strength of your WiFi.

Thankfully, checking for updates is quick and easy. All you need to do is turn on your grill and open up the company’s app on your phone. If there’s a download ready for your model, the mobile software will let you know and it’s usually quite prominent. If there’s not a pop-up alert that displays immediately, you can check the settings menu just to make sure. Sometimes for smaller updates, a company might not beat you over the head to refresh. However, starting a fresh slate of firmware is always a safe bet and will ensure your grill is running at its best when it comes time to cook.

For a good time every time, clean after each use

Traeger Ironwood 650

Billy Steele/Engadget

I’ll be the first to admit I don’t adhere to my own advice here, but it’s nice to have goals. I will also be the first to tell you every single time I smoke a Boston Butt or some other super fatty cut of meat that I wish I would’ve done at least a quick cleaning right after the meal. Grease buildup is not only highly flammable but it’s much harder to clean once it cools and solidifies. Ditto for stuck-on sauce or cheese that’s left on your grates after chicken or burgers. It’s best to attack these things while the grill is still warm, but cooled down from the cook.

You don’t necessarily have to break out the shop vac each time for your pellet grill or empty the grease bin. But you’ll want to make sure that stuff is away from the main cooking area for safety and so any burn off won’t impact the flavor of your food. A few cups of hot water can cleanse the grease run-off while that wire brush I mentioned is best for the grates. It also doesn’t hurt to do a light wipe down with an all-natural cleaner so everything is ready to go when you want to cook again.

New grills coming soon

A number of grill companies have already announced their 2024 product lineups. If you’re looking for new gear for the summer, some of them are already available while others will be arriving over the next few weeks. Recteq announced a robust group of grills in October, all of which are Wi-Fi-connected pellet models. The company also updated its family of “regular” pellet grills while taking the wraps off the SmokeStone 600 griddle and the dual-chamber DualFire 1000.

Weber has also tipped its hand for 2024. Back at CES, the company revealed a redesigned pellet grill, the Searwood, that will replace the SmokeFire in North America. Part of Searwood’s feature set is a special mode that allows you to use the grill while the lid is open for things like searing and flat-top griddling. Weber also debuted a new gas griddle, the Slate, that has a specially designed cooking surface that the company promises won’t rust and a digital temperature gauge. What’s more, there’s a new premium Summit smart gas grill with a massive touchscreen color display and top-mounted infrared broiler. Smart features here help with everything from gas flow to individual burners to monitoring fuel supply and dialing in the cooking process. All of the new Weber grills are scheduled to arrive this spring.

We haven’t heard much from Traeger this year and there’s a good chance the company won’t have new grills in 2024. It overhauled the Timberline in 2022 and brought some of the latest features, including the touchscreen display, to the Ironwood in 2023. Never say never, but if you’re looking for another all-new Traeger grill, you might be waiting several months.


Is AI ready to mass-produce lay summaries of research articles?


Generative AI might be a powerful tool in making research more accessible for scientists and the broader public alike. Credit: Getty

Thinking back to the early days of her PhD programme, Esther Osarfo-Mensah recalls struggling to keep up with the literature. “Sometimes, the wording or the way the information is presented actually makes it quite a task to get through a paper,” says the biophysicist at University College London. Lay summaries could be a time-saving solution. Short synopses of research articles written in plain language could help readers to decide which papers to focus on — but they aren’t common in scientific publishing. Now, the buzz around artificial intelligence (AI) has pushed software engineers to develop platforms that can mass-produce these synopses.

Scientists are drawn to AI tools because they excel at crafting text in accessible language, and they might even produce clearer lay summaries than those written by people. A study1 released last year looked at lay summaries published in one journal and found that those created by people were less readable than were the original abstracts — potentially because some researchers struggle to replace jargon with plain language or to decide which facts to include when condensing the information into a few lines.
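
To make that comparison concrete, here is a minimal sketch of how readability can be scored programmatically, using the Flesch reading-ease metric from the textstat Python package (higher scores mean easier text). The metric, the package and the example sentences are illustrative choices, not necessarily those used in the study.

  # Illustrative readability check: higher Flesch reading-ease scores mean easier text.
  # Assumes the textstat package is installed; the example strings are made up.
  import textstat

  abstract = ("We characterize the dose-dependent attenuation of inflammatory "
              "cytokine expression following ligand-mediated receptor inhibition.")
  lay_summary = ("We found that a drug that blocks a cell receptor also lowers "
                 "the signals that cause inflammation, and more drug means less signal.")

  for label, text in [("original abstract", abstract), ("lay summary", lay_summary)]:
      score = textstat.flesch_reading_ease(text)
      print(f"{label}: Flesch reading ease = {score:.1f}")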

AI lay-summary platforms come in a variety of forms (see ‘AI lay-summary tools’). Some allow researchers to import a paper and generate a summary; others are built into web servers, such as the bioRxiv preprint database.

AI lay-summary tools

Several AI resources have been developed to help readers glean information about research articles quickly. They offer different perks. Here are a few examples and how they work:

– SciSummary: This tool parses the sections of a paper to extract the key points and then runs those through the general-purpose large language model GPT-3.5 to transform them into a short summary written in plain language. Max Heckel, the tool’s founder, says it incorporates multimedia into the summary, too: “If it determines that a particular section of the summary is relevant to a figure or table, it will actually show that table or figure in line.”

– Scholarcy: This technology takes a different approach. Its founder, Phil Gooch, based in London, says the tool was trained on 25,000 papers to identify sentences containing verb phrases such as “has been shown to” that often carry key information about the study. It then uses a mixture of custom and open-source large language models to paraphrase those sentences in plain text. “You can actually create ten different types of summaries,” he adds, including one that lays out how the paper is related to previous publications.

– SciSpace: This tool was trained on a repository of more than 280 million data sets, including papers that people had manually annotated, to extract key information from articles. It uses a mixture of proprietary fine-tuned models and GPT-3.5 to craft the summary, says the company’s chief executive, Saikiran Chandha, based in San Francisco, California. “A user can ask questions on top of these summaries to further dig into the paper,” he notes, adding that the company plans to develop audio summaries that people can tune into on the go.
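
The tools above are proprietary, but the general pipeline they describe (pick out a paper's key sentences, then ask a large language model to paraphrase them in plain language) can be sketched in a few lines of Python. The key-phrase heuristic, model name and prompt below are simplified assumptions for illustration, not the actual implementations of SciSummary, Scholarcy or SciSpace.

  # Simplified sketch of a lay-summary pipeline, not any vendor's real code:
  # 1) pull out sentences that look like key findings, 2) paraphrase them in plain language.
  from openai import OpenAI

  KEY_PHRASES = ("has been shown to", "we found", "our results", "we demonstrate")

  def extract_key_sentences(paper_text: str) -> list[str]:
      sentences = [s.strip() for s in paper_text.split(".") if s.strip()]
      return [s for s in sentences if any(p in s.lower() for p in KEY_PHRASES)]

  def lay_summary(paper_text: str) -> str:
      key_points = extract_key_sentences(paper_text) or [paper_text[:1000]]
      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",  # illustrative; real tools mix custom and commercial models
          messages=[{"role": "user",
                     "content": "Rewrite these findings as a short plain-language summary:\n"
                                + "\n".join(key_points)}],
      )
      return response.choices[0].message.content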

Benefits and drawbacks

Mass-produced lay summaries could yield a trove of benefits. Beyond helping scientists to speed-read the literature, the synopses can be disseminated to people with different levels of expertise, including members of the public. Osarfo-Mensah adds that AI summaries might also aid people who struggle with English. “Some people hide behind jargon because they don’t necessarily feel comfortable trying to explain it,” she says, but AI could help them to rework technical phrases.

Max Heckel is the founder of SciSummary, a company in Columbus, Ohio, that offers a tool that allows users to import a paper to be summarized. The tool can also translate summaries into other languages, and is gaining popularity in Indonesia and Turkey, he says, arguing that it could topple language barriers and make science more accessible.

Despite these strides, some scientists feel that improvements are needed before we can rely on AI to describe studies accurately.

Will Ratcliff, an evolutionary biologist at the Georgia Institute of Technology in Atlanta, argues that no tool can produce better text than can professional writers. Although researchers have different writing abilities, he invariably prefers reading scientific material produced by study authors over that generated by AI. “I like to see what the authors wrote. They put craft into it, and I find their abstract to be more informative,” he says.

Nana Mensah, a PhD student in computational biology at the Francis Crick Institute in London, adds that, unlike AI, people tend to craft a narrative when writing lay summaries, helping readers to understand the motivations behind each step of the study. He says, however, that one advantage of AI platforms is that they can write summaries at different reading levels, potentially broadening the audience. In his experience, though, these synopses might still include jargon that can confuse readers without specialist knowledge.

AI tools might even struggle to turn technical language into lay versions at all. Osarfo-Mensah works in biophysics, a field with many intricate parameters and equations. She found that an AI summary of one of her research articles excluded information from a whole section. If researchers were looking for a paper with those details and consulted the AI summary, they might abandon her paper and look for other work.

Andy Shepherd, scientific director at global technology company Envision Pharma Group in Horsham, UK, has in his spare time compared the performances of several AI tools to see how often they introduce blunders. He used eight text generators, including general ones and some that had been optimized to produce lay summaries. He then asked people with different backgrounds, such as health-care professionals and the public, to assess how clear, readable and useful lay summaries were for two papers.

“All of the platforms produced something that was coherent and read like a reasonable study, but a few of them introduced errors, and two of them actively reversed the conclusion of the paper,” he says. It’s easy for AI tools to make this mistake by, for instance, omitting the word ‘not’ in a sentence, he explains. Ratcliff cautions that AI summaries should be viewed as a tool’s “best guess” of what a paper is about, stressing that it can’t check facts.

Broader readership

The risk of AI summaries introducing errors is one concern among many. Another is that one benefit of such summaries — that they can help to share research more widely among the public — could also have drawbacks. The AI summaries posted alongside bioRxiv preprints, research articles that have yet to undergo peer review, are tailored to different levels of reader expertise, including that of the public. Osarfo-Mensah supports the effort to widen the reach of these works. “The public should feel more involved in science and feel like they have a stake in it, because at the end of the day, science isn’t done in a vacuum,” she says.

But others point out that this comes with the risk of making unreviewed and inaccurate research more accessible. Mensah says that academics “will be able to treat the article with the sort of caution that’s required”, but he isn’t sure that members of the public will always understand when a summary refers to unreviewed work. Lay summaries of preprints should come with a “hazard warning” informing the reader upfront that the material has yet to be reviewed, says Shepherd.

“We agree entirely that preprints must be understood as not peer-reviewed when posted,” says John Inglis, co-founder of bioRxiv, who is based at Cold Spring Harbor Laboratory in New York. He notes that such a disclaimer can be found on the homepage of each preprint, and if a member of the public navigates to a preprint through a web search, they are first directed to the homepage displaying this disclaimer before they can access the summary. But the warning labels are not integrated into the summaries, so there is a risk that these could be shared on social media without the disclaimer. Inglis says bioRxiv is working with its partner ScienceCast, whose technology produces the synopses, on adding a note to each summary to negate this risk.

As is the case for many other nascent generative-AI technologies, humans are still working out the messaging that might be needed to ensure users are given adequate context. But if AI lay-summary tools can successfully mitigate these and other challenges, they might become a staple of scientific publishing.


Apple Sports iPhone app update gets it ready for March Madness


New Apple Sports app
The new Apple Sports app for iPhone got an update for basketball and baseball fans.
Image: Apple/Cult of Mac

The new Apple Sports iPhone application just received its first big update. It’s now ready for basketball’s March Madness and baseball’s Opening Day.

The free app that debuted in February gives fans access to real-time scores, stats and more for their favorite teams across a wide range of sports leagues.

Apple Sports 1.1 update focuses on basketball and baseball

“We created Apple Sports to give sports fans what they want — an app that delivers incredibly fast access to scores and stats,” said Eddy Cue, Apple’s SVP of services, in a statement.

An update released Thursday morning by Apple promises:

  • Ready for March Madness? Follow the Men’s and Women’s NCAA Basketball Tournaments for real-time updates.
  • Starting with Opening Day, go deep this MLB season with play-by-play updates, betting odds, box scores, and more for all of your favorite teams.
  • Final scores are now sorted by league.

More about the app

Users of the recently released Apple Sports application can follow their favorite teams, tournaments and leagues. Fans also can get play-by-play information, team stats, lineup details and live betting odds.

Currently, fans can follow teams and leagues that are in season. Others, including the NFL, will be added as their seasons get closer.

The updated software is available now in the App Store. It’s free to install and use.

Be sure to read Cult of Mac‘s guide on how to download, set up and use the new Apple Sports app for iPhone.




Apple’s iOS 17.4 is Almost Ready for Release (Video)


Apple released iOS 17.4 beta 4 last week and is expected to release the Release Candidate of the software this week; the latest beta looks close to the final version. A new video from iDeviceHelp gives us more details on the new iOS 17.4 software update for the iPhone.

As we edge closer to the official release, let’s delve into what the latest beta version holds for users, based on a detailed video update shared by a seasoned user. This exploration will guide you through the enhanced features, performance improvements, and what you can expect in terms of release timing.

The journey of iOS 17.4 is nearing its conclusion, with anticipation swirling around a Release Candidate (RC) version arriving soon. This signals a polished and nearly complete software version ready for broader testing.

Users longing for extended battery life will be pleased to know significant strides have been made in this domain. Each beta iteration has brought forth improvements, hinting at a potential solution to the notorious battery drain issue.

If you’ve faced challenges with Wi-Fi and cellular connections, iOS 17.4 Beta 4 brings good news. Enhanced Wi-Fi stability and robust cellular service are among the highlights, indicating a smoother, more reliable experience.

A refreshed experience awaits Apple Music aficionados. A novel splash screen now greets users with monthly replays, making it easier to dive back into your favorite tunes. Moreover, navigating the app has been streamlined, thanks to the introduction of a “Home” button, replacing the “Listen Now” tab.

While the exact release date remains under wraps, speculations suggest the iOS 17.4 RC could land in the hands of developers and public beta testers by the end of February. The official public rollout is expected to follow shortly, possibly in the first full week of March. These projections, based on past patterns and current observations, paint a promising picture for eager users.
Source & Image Credit: iDeviceHelp


The world is not ready for ChatGPT-5 says OpenAI


OpenAI, a leading artificial intelligence research lab responsible for creating the ChatGPT AI models, has sparked concern about society’s readiness for advanced AI systems, such as the hypothetical ChatGPT-5. The statement, made by an OpenAI employee, suggests that the organization is intentionally not sharing certain AI technologies widely, hoping to prompt a social response to prepare for more advanced systems.

Over the last 18 months, artificial intelligence (AI) has been pushing the boundaries of what computers can do. OpenAI, the company responsible for the explosion in AI applications thanks to its release of ChatGPT, has recently taken a step that has sparked a lively debate about how society should handle the rapid advancement of AI technologies. The company has suggested slowing down the release of certain AI capabilities, such as those that might be found in a future version of its AI models, to allow society to catch up and prepare for what’s coming next.

This cautious approach has led to a split in public opinion. On one side, there’s excitement about the potential of AI to transform our lives, as seen with OpenAI’s latest text-to-video model. On the other, there’s a sense of concern. For those in the workforce, the advancement of AI raises important questions about job security and the ethical implications of machines taking over roles traditionally held by humans. The emergence of AI-generated content also poses moral dilemmas that society must grapple with. These concerns are not just hypothetical; they are pressing issues that need to be addressed as AI continues to evolve.

Open AI ChatGPT-5 are we ready?

There’s much speculation about the progress OpenAI has made behind closed doors. Given their track record of steady improvements, some believe the organization might be on the cusp of a major breakthrough, possibly in the realm of Artificial General Intelligence (AGI). However, OpenAI’s communications have been somewhat ambiguous, leaving room for interpretation and fueling further speculation about their capabilities.


The rapid development of AI technologies and the impending launch of ChatGPT-5 have led some experts to suggest that a gradual release of updated AI models might be the best way to integrate these systems into society. This approach could mitigate the shock of introducing completely new and advanced models all at once. Despite the potential benefits of AI, public sentiment is often skeptical or outright negative. This is evident in the backlash against autonomous vehicles on social media, as well as the calls for stringent regulations or even outright bans on AI.

The societal impact of AI is a complex issue that extends beyond technology. There are fears that AI could worsen social unrest and increase inequality. These concerns have prompted calls for policymakers to take proactive steps to ensure that the benefits of AI are distributed fairly and equitably.

Another pressing concern is the possibility of an “AI race to the bottom.” This scenario envisions companies competing to release powerful AI models without fully considering whether society is ready for them. Such a rush could lead to AI systems that outpace the ethical and regulatory frameworks needed to manage them safely and responsibly.

Concerns of releasing GPT-5

Social Impact and Job Displacement

The integration of advanced AI technologies into various sectors can lead to increased efficiency and cost savings for businesses but also poses a significant risk of job displacement. As AI systems like GPT-5 become capable of performing tasks that range from customer service to content creation and even technical analysis, the roles traditionally filled by humans in these areas may be reduced or eliminated. This shift can lead to widespread economic and social repercussions, including increased unemployment rates, reduced consumer spending, and heightened economic inequality. The social fabric could be strained as communities dependent on industries most affected by AI face economic downturns, potentially leading to societal unrest and challenges in workforce reintegration.

Ethical and Moral Implications

AI’s ability to generate realistic content poses significant ethical challenges. For instance, the creation of deepfakes can undermine the authenticity of information, making it difficult to distinguish between real and artificial content. This capability could be misused to spread misinformation, manipulate elections, or conduct fraudulent activities, posing threats to democracy, privacy, and public trust. The ethical dilemma extends to the responsibility of developers and platforms in preventing misuse while ensuring freedom of expression and innovation.

Safety and Control

As AI systems grow more complex, ensuring their alignment with human values and ethical principles becomes increasingly difficult. There’s a risk that AI could develop harmful or unintended behaviors not anticipated by its creators, potentially due to the complexity of their decision-making processes or emergent properties of their learning algorithms. This raises concerns about the safety measures in place to prevent such outcomes and the ability to control or correct AI systems once they are operational.

Technological Advancement vs. Societal Readiness

The pace at which AI technologies are advancing may surpass society’s capacity to comprehend, regulate, and integrate these systems effectively. This gap can lead to disruptions, as existing legal, ethical, and regulatory frameworks may be inadequate to address the challenges posed by advanced AI. The rapid introduction of AI technologies could result in societal challenges, including privacy violations, ethical dilemmas, and the need for new laws and regulations, which may be difficult to develop and implement in a timely manner.

Transparency and Accountability

The development of AI systems like GPT-5 involves decisions that can have broad implications. Concerns arise regarding the transparency of the processes used to train these models, the sources of data, and the criteria for decision-making. The accountability of organizations developing AI technologies is crucial, especially in instances where AI’s actions lead to harm or bias. Ensuring that these systems are developed and deployed transparently, with clear lines of accountability, is essential to maintain public trust and ensure ethical use.

Race to the Bottom

The competitive nature of the AI industry might drive companies to prioritize technological advancement over safety, ethics, and societal impact, leading to a “race to the bottom.” This competition can result in the release of powerful AI technologies without sufficient safeguards, oversight, or consideration of long-term impacts, increasing the risk of negative outcomes. The pressure to stay ahead can compromise the commitment to ethical standards and responsible innovation.

AI and Inequality

Advanced AI technologies have the potential to significantly benefit those who have access to them, potentially exacerbating existing inequalities. The “digital divide” between individuals and countries with access to advanced AI and those without could widen, leading to disparities in economic opportunities, education, healthcare, and more. This division not only affects individual and national prosperity but also raises ethical questions about the equitable distribution of AI’s benefits and the global management of its impacts.

Addressing these concerns requires a coordinated effort from governments, industry, academia, and civil society to develop comprehensive strategies that include ethical guidelines, robust regulatory frameworks, public engagement, and international cooperation. Ensuring that AI development is guided by principles of equity, safety, and transparency is crucial to harnessing its potential benefits while mitigating risks and negative impacts.

The recent statements by OpenAI have brought to light the multifaceted challenges posed by the rapid evolution of AI. As we navigate these challenges, it’s clear that a balanced approach to AI development and integration is crucial. This strategy must take into account not only the technological advancements but also the ethical, social, and regulatory aspects that are essential for AI to coexist harmoniously with humanity.

The debate around OpenAI’s cautionary stance on AI development is a microcosm of the larger conversation about how we, as a society, should approach the integration of these powerful technologies into our daily lives. It’s a conversation that requires the input of not just technologists and policymakers but also of the broader public, whose lives will be increasingly influenced by the decisions made today. As AI continues to advance, finding the right balance between innovation and responsibility will be key to ensuring that the future of AI aligns with the best interests of humanity. What are your thoughts on AI and the release of ChatGPT-5 and Artificial General Intelligence?
