If you’re a fan of historical dramas featuring quick wits and absolute… bad people, Prime Video’s My Lady Jane looks like it should be at the top of your watch list. Inspired by the best-selling book of the same name, the new series is set in an alternative version of the Tudor world where Lady Jane Grey didn’t lose her head after just nine days on the throne.
It turns out that not being beheaded is a pretty good career move. But after Lady Jane finds herself crowned Queen of England, a move that is not universally popular, it doesn’t take long before ne’er-do-wells, ruffians and other scoundrels come for her crown and her head.
According to Prime Video, the series is an epic tale of true love and high adventure, where the damsel in distress saves herself, her true love and then the Kingdom. The streamer has said that the show will be released in late June, which sets up a tasty royal clash with Netflix’s own period drama Bridgerton: the second part of that hit show’s third season is set to launch on June 13.
Heads you (don’t) lose
Unusually for a prestige drama, the lead actor here is a newcomer: Emily Bader, who plays the titular Lady Jane. Her rascally husband Guildford is played by Edward Bluemel of Killing Eve, and Jordan Peters from Pirates plays the scheming King Edward. The cast also includes the always-watchable Dominic Cooper from Preacher as Lord Seymour, Anna Chancellor (Pennyworth) as Jane’s mother, and Rob Brydon (The Trip) as Guildford’s father.
With a behind-the-camera team including showrunner and creator Gemma Burgess (Brooklyn Girls), co-showrunner Meredith Glynn (The Boys) and producing director Jamie Babbitt (Only Murders In The Building), this looks likely to be a particularly irreverent take on the genre. In this telling, the damsel in distress saves herself.
My Lady Jane will be streaming from Thursday, June 27.
The best OLED TVs are about to get a whole lot better. A new panel technology known as eLEAP will officially go into production later this year, according to FlatpanelsHD. Although it won’t be going into any big-name TVs at first, the new screen technology promises brightness in excess of 3,000 nits and improved durability, which could make screens last longer and help cut down on e-waste.
eLEAP was developed by Japan Display (JDI), a firm created by the merger of the display businesses of Sony, Toshiba and Hitachi. We first started reporting on the technology in 2022, but it’s only just starting to ramp up production, with plans to expand to the mainstream market in late 2024.
Although no consumer brands have yet announced plans to use the new tech, the panels are likely to appear in laptops first, with one of the first panels being a 14-inch OLED for portable computers. That’ll deliver peak brightness of 1,600 nits, but even brighter panels are imminent.
What is eLEAP OLED?
eLEAP – an extremely tenuous acronym for “environment positive lithography with maskless deposition, extreme long life, low power and high luminance” – uses light to transfer patterns, a technique borrowed from the manufacturing of integrated circuits, and one that can deliver increased brightness and durability. That’s also great news for cutting down on e-waste.
This is the first OLED technology to use such a process, and according to Japan Display, production is currently running six months ahead of schedule. Eight months before launch, JDI says it’s already achieving production yields of 60%. The higher the yield, the more efficient the production and the lower the cost.
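To see why yield matters so much for pricing, here’s a quick illustrative calculation. The 60% figure comes from JDI; the per-panel cost below is a made-up placeholder purely for the example.

```python
# Illustrative only: how production yield affects the cost of each usable panel.
# The $100 per-attempt cost is a hypothetical placeholder; the 60% yield is JDI's figure.
COST_PER_ATTEMPT = 100  # cost to manufacture one panel, whether it passes QC or not

for yield_rate in (0.60, 0.80, 0.95):
    cost_per_good_panel = COST_PER_ATTEMPT / yield_rate
    print(f"yield {yield_rate:.0%}: ~${cost_per_good_panel:.0f} per usable panel")

# At 60% yield, each sellable panel effectively carries the cost of ~1.67 attempts,
# so every improvement in yield translates directly into a lower unit cost.
```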
According to JDI, it will supply eLEAP panels “for use in a wide array of end-use applications, including smartwatches and wearables, smartphones, notebook PCs, and automotive displays”. TVs are currently conspicuous by their absence, however. That’s because the manufacturing capacity to produce larger panels isn’t there yet: JDI’s plant for that is not expected to be online until 2027.
It’s a big day for Quest users. Meta has announced it’s giving third-party companies open access to its headsets’ operating system to expand the technology. The tech giant wants developers to take the OS, expand it into other frontiers, and accomplish two main goals: give consumers more choice in the virtual reality gaming market and give developers a chance to reach a wider audience.
Among this first batch of partners, some are already working on a Quest device. First off, ASUS’ ROG (Republic of Gamers) brand is said to be developing “an all-new performance gaming headset.” Lenovo is on the list too, seemingly working on three individual models: one for productivity, one for education, and one for entertainment.
This past December, Xbox Cloud Gaming landed on Quest headsets as a beta, bringing a wave of new games to the hardware. Now Microsoft is teaming up with Meta again “to create a limited-edition Meta Quest [headset], inspired by Xbox.”
New philosophy
Meta is also making several name changes befitting its tech’s transformation.
The operating system will now be known as Horizon OS. The company’s Meta Quest Store will be renamed the Horizon Store, and the mobile app will eventually be rebranded as the Horizon app. To aid with the transition, third-party devs are set to receive a spatial app framework to bring their software over to Horizon OS or help them create a new product.
With Horizon at the core of this ecosystem, Meta aims to introduce social features that dev teams “can integrate… into their [software]”. The company wants to bridge multiple platforms, creating a network that exists “across mixed reality, mobile, and desktop devices.” Users will be able to move their avatars, friend groups, and more onto other “virtual spaces”.
This design philosophy was echoed by Meta CEO Mark Zuckerberg. In a recent Instagram video, Zuckerberg stated he wants Horizon OS to be an open playground where developers can come in and freely create software rather than a walled garden similar to iOS.
Breaking down barriers
It’ll be a while until we see any of these headsets launch. Zuckerberg said in his post that “it’s probably going to take a couple of years for these” products to arrive. At the moment, Meta is “removing the barriers” between its App Lab and digital storefront, allowing devs to publish software on the platform as long as they meet “basic technical and content” guidelines. It’s unknown if there will be any further limitations beyond the requirement that third-party companies use Snapdragon processors.
There’s no word yet on whether other tech brands will join in. Zuckerberg says he hopes to see the Horizon Store offer lots of software options from Steam, Xbox Cloud Gaming, and even apps from the Google Play Store – “if they’re up for it.” It seems Google isn’t on board with Horizon OS yet.
Rumors have been circulating these past several months claiming Google and Samsung are working together on an XR/VR headset. Perhaps the two are ignoring Meta’s calls to focus on their “so-called Apple Vision Pro rival”.
With the advent of increased third-party support on iOS, video game emulators have rushed to the App Store to fill the gap. The first batch has been primarily for old Commodore 64 and Game Boy titles. However, this could soon change, as we may see an emulator capable of running Sony PlayStation and Sega Saturn games. The app in question is called Provenance EMU. In an email to news site iMore, project lead Joseph Mattiello said his team is working on bringing its software to the App Store.
Provenance, if you’re not familiar, can run titles from a variety of consoles, from famous ones such as the Super Nintendo to more obscure machines. It’s unknown when the emulator will make its debut. Mattiello says his team needs to make some quality-of-life fixes first, and he wants to “investigate” the new rules. The report doesn’t explain what he’s referring to, but Mattiello may be talking about the recent changes Apple made to the App Review Guidelines. Lines were added in early April stating that “developers are responsible for all the software inside their apps” and that emulators need to “comply with all applicable laws”.
Warning
Please note that the use of emulators may violate the game developer’s and publisher’s terms and conditions, as well as applicable intellectual property laws. These vary, so please check them. Emulators should only ever be used with your own purchased copy of a game. TechRadar does not condone or encourage the illegal downloading of games or other actions that infringe copyright.
This could put third-party developers under deep scrutiny from gaming publishers. Nintendo, for example, is not afraid to sic its lawyers on developers it claims are violating the law. Look at what happened with Yuzu.
Game emulation currently exists in a legal gray area. Emulators have so far been allowed to exist, but one wrong move could bring the hammer down. So, Mattiello wants to ensure his team won’t be stepping on any landmines at launch. If all goes well, we could see a new era of mobile gaming; one where the titles aren’t just sidescrollers with sprites, but games featuring fleshed-out 3D models and environments.
What to play
We don’t recommend downloading random ROMs of games off the internet. Not only could they violate intellectual property laws, but they can also harbor malware, as these digital libraries aren’t always the most secure.
So if and when Provenance is released on the App Store, what can people play? At the moment, it seems users will have to try out homebrew games: independently made titles built for classic consoles and their emulators, often mimicking the graphical styles of the originals.
iMore recommends PSX Place, a website where hobbyists come together to share their homebrewed PlayStation games. Itch.io is another great resource. If you ever wanted to play a fan adaptation of Twin Peaks, Itch.io has one available. For Game Boy-style titles, Homebrew Hub has tons of fan-made projects. Personally, we would love to see publishers like Sony and Nintendo release their games on iOS. That way, people can enjoy the classics without skirting the law.
For those looking to upgrade, check out TechRadar’s guide for the best iPhone for 2024.
The lead developer of the multi-emulator app Provenance has told iMore that his team is working towards releasing the app on the App Store, but he did not provide a timeframe. Provenance is a frontend for many existing emulators, and it would allow iPhone and Apple TV users to emulate games released for a wide variety of classic game consoles, including the original PlayStation, GameCube, Wii, SEGA Genesis, Atari 2600, and others.
Apple has so far approved emulators on the App Store for older Nintendo consoles and the Commodore 64. For example, Riley Testut’s popular Delta emulator is now in the App Store in many countries, and it can emulate games released for the Game Boy Pocket, Game Boy Color, Game Boy Advance, Nintendo Entertainment System (NES), Super Nintendo Entertainment System (SNES), Nintendo 64, and Nintendo DS. Provenance would bring the first Sony, SEGA, and Atari emulators to the App Store if approved.
Provenance has been in development since 2016, and it can already be sideloaded on the iPhone and the Apple TV outside of the App Store.
Apple updated its App Review Guidelines earlier this month to allow “retro game console emulator apps” on the App Store for the iPhone, iPad, Mac, and other devices. Earlier this week, Apple told us that emulators that can load games (ROMs) are permitted on the App Store, so long as the apps are emulating “retro console games” only. It is unclear if Apple will consider consoles like the GameCube and Wii to be “retro.”
While a U.S. court ruled that emulators are legal, downloading copyrighted ROMs is typically against the law in the country. On its customer support website for the U.S., Nintendo says that downloading pirated copies of its games is illegal. A wide collection of public-domain “homebrew” games are available to play legally.
Since Google and Samsung merged Wear OS and Tizen, Google has been rapidly improving Wear OS. It has introduced new applications for the platform, including Google Calendar and Gmail, improved existing apps such as Google Home and Google Maps, added new Tiles, and rolled out new watch face and health data systems for a better user experience. Now the company is working on another Wear OS improvement that could come to Galaxy Watches.
According to the Google News channel on Telegram, version 2.3.0 of the Google Pixel Watch app for Android contains an option called ‘Sync permission from phone’, which will “Give your watch the same app permissions that you’ve allowed” on your smartphone. In other words, once you enable the option, the app will sync permissions between your Android smartphone and the Pixel Watch for the apps the two devices have in common.
For example, if you give Google Maps on your Android smartphone permission to access your location, then with the new option enabled, the Google Pixel Watch app will extend the same permission to Google Maps on your Pixel Watch, saving you the hassle of granting permissions separately on the smartphone and the smartwatch. The option is located inside the ‘Device details’ menu, which is currently hidden behind a flag.
At the moment, there’s no information about when Google will make the new option available to the public or if it will offer this feature to other Wear OS smartwatches, such as Samsung’s Galaxy Watch lineup. We hope that the company extends the new feature to other Wear OS smartwatches as it would offer people more convenience and a better user experience. In the meantime, you can check out the all-new Shazam app for Wear OS.
As anticipation builds for Apple’s upcoming iPhone 16 series, the rumor mill has highlighted some potential camera upgrades that could change how we use our iPhones for photography.
The camera system has always been a cornerstone of Apple’s iPhone, and this year Apple appears set to push the envelope even further. As the iPhone 16 launch in September approaches, all eyes will be on how the following changes might maintain Apple’s competitive edge in an increasingly crowded market.
1. Vertical Camera Layout
iPhone 16 & iPhone 16 Plus
Apple’s iPhone 16 base models will feature a vertical camera arrangement with a pill-shaped raised surface, instead of a diagonal camera arrangement like the iPhone 15. The new camera bump features two separate camera rings for the Wide and Ultrawide cameras. The vertical camera layout is expected to enable Spatial Video recording, which is currently limited to the iPhone 15 Pro models.
2. Ultra Wide Lens Upgrade
iPhone 16 Pro & iPhone 16 Pro Max
The iPhone 16 Pro models are expected to feature an upgraded 48-megapixel Ultra Wide camera lens, which would allow it to capture more light, resulting in improved photos when shooting in 0.5× mode, especially in low-light environments. This also means that iPhone 16 Pro models should be able to shoot 48-megapixel ProRAW photos in Ultra Wide mode. These photos retain more detail in the image file for more editing flexibility, and can be printed at large sizes.
3. Super Telephoto Camera
iPhone 16 Pro Max
The iPhone 16 Pro Max could be the first iPhone to feature a super telephoto periscope camera for dramatically increased optical zoom. “Super” or “ultra” telephoto usually describes cameras with a focal length of over 300mm. The current telephoto lens is equivalent to a 77mm lens, so if the rumor is accurate, there could be a notable increase in zoom capabilities. Super telephoto cameras are often used for sports and wildlife photography, but the extremely soft backgrounds they create also make them useful for portrait photography, provided there is enough distance between the subject and the photographer.
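To put those focal lengths in perspective, here’s a rough back-of-the-envelope calculation. iPhone zoom figures are quoted relative to the main camera, which on recent models is roughly a 24mm-equivalent lens (that baseline is our assumption, not something stated in the rumor, and the 300mm figure is simply the generic “super telephoto” threshold mentioned above rather than a confirmed spec).

```python
# Rough zoom-factor arithmetic, assuming a 24mm-equivalent main camera as the 1x
# baseline (true of recent iPhones, but an assumption here, not a reported spec).
MAIN_CAMERA_MM = 24

for focal_length_mm in (77, 120, 300):
    zoom = focal_length_mm / MAIN_CAMERA_MM
    print(f"{focal_length_mm}mm-equivalent lens is roughly {zoom:.1f}x optical zoom")

# 77mm  -> ~3.2x (the current iPhone 15 Pro telephoto)
# 120mm -> ~5.0x (the iPhone 15 Pro Max tetraprism lens)
# 300mm -> ~12.5x, which is why a super telephoto would be such a big jump
```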
4. Tetraprism Lens
iPhone 16 Pro
Both iPhone 16 Pro models are expected to feature 5x optical zoom, which is currently exclusive to the iPhone 15 Pro Max. Apple’s tetraprism lens system has a “folded” design that allows it to fit inside the smartphone, enabling up to 5x optical zoom and up to 25x digital zoom. In contrast, the current smaller iPhone 15 Pro is limited to up to 3x optical zoom, which is in line with the iPhone 14 Pro and iPhone 14 Pro Max.
5. Reduced Lens Flare
All iPhone 16 Models
Apple is said to be testing a new anti-reflective optical coating technology for its iPhone cameras that could improve the quality of photos by reducing artifacts like lens flare and ghosting. Apple plans to bring new atomic layer deposition (ALD) equipment into the iPhone camera lens manufacturing process to apply the coating. ALD-applied materials can also protect against environmental damage to the camera lens system without affecting the sensor’s ability to capture light effectively.
6. Capture Button
All iPhone 16 Models
All iPhone 16 models will have a new camera-based “Capture Button” dedicated to quickly triggering image or video capture. The button will add features like the ability to zoom in and out by swiping left and right on the button, focus on a subject with a light press, and activate a recording with a more forceful press. The Capture Button will be located on the bottom right side of the iPhone 16, and will take the place of the mmWave antenna on U.S. iPhone models, with the antenna relocating to the left side of the device below the volume and Action buttons.
It’s been confirmed that Ghost of Tsushima Director’s Cut is to be the first PC port to receive PlayStation trophy support. This version of the game, coming to PC on May 16, is the full package. That means it’ll include the excellent Iki Island expansion and the cooperative multiplayer Legends mode.
The news comes via an official PlayStation Blog post written by Julian Huijbregts, online community specialist at Nixxes Software, the team responsible for the port. The post confirms trophy support alongside multiplayer crossplay for the game’s Legends mode and an entirely new PlayStation overlay for PC players.
Starting with the trophy support, Ghost of Tsushima Director’s Cut on PC will have the same trophies as its console version counterpart. Furthermore, if you’ve connected your PlayStation Network account, any and all trophies you’ve previously unlocked will carry over to the PC version, so there’s no need to grab them twice. And fear not, achievements on Steam and Epic Games Store are supported, too.
It will also be the first PC port to receive an all-new PlayStation overlay, accessible by pressing Shift+F1 on your keyboard. The overlay contains all your PlayStation Network information and stats, including trophies, your friends list and profile settings, and it supports voice chat. It seems it’ll be entirely unobtrusive too, built into the game rather than requiring a separate downloadable client.
Ghost of Tsushima is the first of presumably many PlayStation PC ports to receive this new overlay. We imagine it’ll come to existing ports like Horizon Forbidden West and Returnal at some point, alongside any additional ports yet to be announced (we’d still love a Gran Turismo 7 port, Sony).
Google next month will make its latest AI-powered photo editing feature available to all users of Google Photos on iOS, the company has announced.
Magic Editor, which featured heavily in last year’s Google Pixel 8 series marketing blitz, uses generative AI to perform complicated photo edits, such as filling in gaps in a photo, repositioning subjects, and additional foreground/background adjustments like making a cloudy, grey sky look blue.
The edits mimic the kind of possibilities afforded by more professional editing tools like Photoshop, except Magic Editor achieves its automated results via AI, rather than the user having to do them manually.
The editing tool debuted as one of the headline AI features on the company’s flagship phone when it launched six months ago, and has since been exclusive to Google Pixel 8 owners and Google One subscribers. The tool will become available to all users of Google Photos starting May 15.
Google Photos for iOS and Android will include 10 Magic Editor saves per month. To use more than that, users will need to buy a Premium Google One plan, which starts at 2TB of storage for $10 per month or $100 annually.
In addition to Magic Editor, Google is bringing several more editing tools to Google Photos, including Photo Unblur, Sky suggestions, Color pop, HDR effect for photos and videos, Portrait Blur, Portrait Light (plus its add light/balance light features), Cinematic Photos, Styles in the Collage Editor, and Video Effects.
To use the AI features, Apple devices must be running iOS 15 or later. Google Photos is a free download for iPhone and iPad available on the App Store.
Leo Wu, an economics student at Minerva University in San Francisco, California, founded a group to discuss how AI tools can help in education. Credit: AI Consensus
The world had never heard of ChatGPT when Johnny Chang started his undergraduate programme in computer engineering at the University of Illinois Urbana–Champaign in 2018. All that the public knew then about assistive artificial intelligence (AI) was that the technology powered joke-telling smart speakers or the somewhat fitful smartphone assistants.
But, by his final year in 2023, Chang says, it became impossible to walk through campus without catching glimpses of generative AI chatbots lighting up classmates’ screens.
“I was studying for my classes and exams and as I was walking around the library, I noticed that a lot of students were using ChatGPT,” says Chang, who is now a master’s student at Stanford University in California. He studies computer science and AI, and is a student leader in the discussion of AI’s role in education. “They were using it everywhere.”
ChatGPT is one example of the large language model (LLM) tools that have exploded in popularity over the past two years. These tools take user inputs in the form of written prompts or questions and generate human-like responses, drawing on the vast catalogue of Internet text they were trained on. As such, generative AI produces new content based on the information it has already seen.
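In practice, most students interact with these models through a chat interface or a programmatic API. The sketch below shows roughly what that looks like in code; it assumes the openai Python package and an API key are available, and the model name and flashcard-style prompt are illustrative placeholders rather than anything the students quoted here actually used.

```python
# Minimal sketch of calling a hosted LLM, assuming the `openai` package (v1+)
# is installed and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "Turn these lecture notes into three revision flashcards: ..."},
    ],
)

# The reply is newly generated text based on patterns in the model's training data.
print(response.choices[0].message.content)
```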
However, these newly generated data — from works of art to university papers — often lack accuracy and creative integrity, ringing alarm bells for educators. Across academia, universities have been quick to place bans on AI tools in classrooms to combat what some fear could be an onslaught of plagiarism and misinformation. But others caution against such knee-jerk reactions.
Victor Lee, who leads Stanford University’s Data Interactions & STEM Teaching and Learning Lab, says that data suggest that levels of cheating in secondary schools did not increase with the roll-out of ChatGPT and other AI tools. He says that part of the problem facing educators is the fast-paced changes brought on by AI. These changes might seem daunting, but they’re not without benefit.
Educators must rethink the model of written assignments “painstakingly produced” by students using “static information”, says Lee. “This means many of our practices in teaching will need to change — but there are so many developments that it is hard to keep track of the state of the art.”
Despite these challenges, Chang and other student leaders think that blanket AI bans are depriving students of a potentially revolutionary educational tool. “In talking to lecturers, I noticed that there’s a gap between what educators think students do with ChatGPT and what students actually do,” Chang says. For example, rather than asking AI to write their final papers, students might use AI tools to make flashcards based on a video lecture. “There were a lot of discussions happening [on campus], but always without the students.”
Computer-science master’s student Johnny Chang started a conference to bring educators and students together to discuss the responsible use of AI. Credit: Howie Liu
To help bridge this communications gap, Chang founded the AI x Education conference in 2023 to bring together secondary and university students and educators to have candid discussions about the future of AI in learning. The virtual conference included 60 speakers and more than 5,000 registrants. This is one of several efforts set up and led by students to ensure that they have a part in determining what responsible AI will look like at universities.
Over the past year, at events in the United States, India and Thailand, students have spoken up to share their perspectives on the future of AI tools in education. Although many students see benefits, they also worry about how AI could damage higher education.
Enhancing education
Leo Wu, an undergraduate student studying economics at Minerva University in San Francisco, California, co-founded a student group called AI Consensus. Wu and his colleagues brought together students and educators in Hyderabad, India, and in San Francisco for discussion groups and hackathons to collect real-world examples of how AI can assist learning.
From these discussions, students agreed that AI could be used to disrupt the existing learning model to make it more accessible for students with different learning styles or who face language barriers. For example, Wu says that students shared stories about using multiple AI tools to summarize a lecture or a research paper and then turn the content into a video or a collection of images. Others used AI to transform data points collected in a laboratory class into an intuitive visualization.
For people studying in a second language, Wu says that “the language barrier [can] prevent students from communicating ideas to the fullest”. Using AI to translate these students’ original ideas or rough drafts crafted in their first language into an essay in English could be one solution to this problem, he says. Wu acknowledges that this practice could easily become problematic if students relied on AI to generate ideas, and the AI returned inaccurate translations or wrote the paper altogether.
Jomchai Chongthanakorn and Warisa Kongsantinart, undergraduate students at Mahidol University in Salaya, Thailand, presented their perspectives at the UNESCO Round Table on Generative AI and Education in Asia–Pacific last November. They point out that AI can have a role as a custom tutor to provide instant feedback for students.
“Instant feedback promotes iterative learning by enabling students to recognize and promptly correct errors, improving their comprehension and performance,” wrote Chongthanakorn and Kongsantinart in an e-mail to Nature. “Furthermore, real-time AI algorithms monitor students’ progress, pinpointing areas for development and suggesting pertinent course materials in response.”
Although private tutors could provide the same learning support, some AI tools offer a free alternative, potentially levelling the playing field for students with low incomes.
Jomchai Chongthanakorn gave his thoughts on AI at a UNESCO round table in Bangkok. Credit: UNESCO/Jessy & Thanaporn
Despite the possible benefits, students also express wariness about how using AI could negatively affect their education and research. ChatGPT is notorious for ‘hallucinating’ — producing incorrect information but confidently asserting it as fact. At Carnegie Mellon University in Pittsburgh, Pennsylvania, physicist Rupert Croft led a workshop on responsible AI alongside physics graduate students Patrick Shaw and Yesukhei Jagvaral to discuss the role of AI in the natural sciences.
“In science, we try to come up with things that are testable — and to test things, you need to be able to reproduce them,” Croft says. But, he explains, it’s difficult to know whether things are reproducible with AI because the software operations are often a black box. “If you asked [ChatGPT] something three times, you will get three different answers because there’s an element of randomness.”
And because AI systems are prone to hallucinations and can give answers only on the basis of data they have already seen, truly new information, such as research that has not yet been published, is often beyond their grasp.
Croft agrees that AI can assist researchers, for example, by helping astronomers to find planetary research targets in a vast array of data. But he stresses the need for critical thinking when using the tools. To use AI responsibly, Croft argued in the workshop, researchers must understand the reasoning that led to an AI’s conclusion. To take a tool’s answer simply on its word alone would be irresponsible.
“We’re already working at the edge of what we understand” in scientific enquiry, Shaw says. “Then you’re trying to learn something about this thing that we barely understand using a tool we barely understand.”
These lessons also apply to undergraduate science education, but Shaw says that he’s yet to see AI play a large part in the courses he teaches. At the end of the day, he says, AI tools such as ChatGPT “are language models — they’re really pretty terrible at quantitative reasoning”.
Shaw says it’s obvious when students have used an AI on their physics problems, because they are more likely to have either incorrect solutions or inconsistent logic throughout. But as AI tools improve, those tells could become harder to detect.
Chongthanakorn and Kongsantinart say that one of the biggest lessons they took away from the UNESCO round table was that AI is a “double-edged sword”. Although it might help with some aspects of learning, they say, students should be wary of over-reliance on the technology, which could reduce human interaction and opportunities for learning and growth.
“In our opinion, AI has a lot of potential to help students learn, and can improve the student learning curve,” Chongthanakorn and Kongsantinart wrote in their e-mail. But “this technology should be used only to assist instructors or as a secondary tool”, and not as the main method of teaching, they say.
Equal access
Tamara Paris is a master’s student at McGill University in Montreal, Canada, studying ethics in AI and robotics. She says that students should also carefully consider the privacy issues and inequities created by AI tools.
Some academics avoid using certain AI systems owing to privacy concerns about whether AI companies will misuse or sell user data, she says. Paris notes that widespread use of AI could create “unjust disparities” between students if knowledge or access to these tools isn’t equal.
Tamara Paris says not all students have equal access to AI tools. Credit: McCall Macbain Scholarship at McGill
“Some students are very aware that AIs exist, and others are not,” Paris says. “Some students can afford to pay for subscriptions to AIs, and others cannot.”
One way to address these concerns, says Chang, is to teach students and educators about the flaws of AI and its responsible use as early as possible. “Students are already accessing these tools through [integrated apps] like Snapchat” at school, Chang says.
In addition to learning about hallucinations and inaccuracies, students should also be taught how AI can perpetuate the biases already found in our society, such as discriminating against people from under-represented groups, Chang says. These issues are exacerbated by the black-box nature of AI — often, even the engineers who built these tools don’t know exactly how an AI makes its decisions.
Beyond AI literacy, Lee says that proactive, clear guidelines for AI use will be key. At some universities, academics are carving out these boundaries themselves, with some banning the use of AI tools for certain classes and others asking students to engage with AI for assignments. Scientific journals are also implementing guidelines for AI use when writing papers and peer reviews that range from outright bans to emphasizing transparent use.
Lee says that instructors should clearly communicate to students when AI can and cannot be used for assignments and, importantly, signal the reasons behind those decisions. “We also need students to uphold honesty and disclosure — for some assignments, I am completely fine with students using AI support, but I expect them to disclose it and be clear how it was used.”
For instance, Lee says he’s OK with students using AI in courses such as digital fabrication — AI-generated images are used for laser-cutting assignments — or in learning-theory courses that explore AI’s risks and benefits.
For now, the application of AI in education is a constantly moving target, and the best practices for its use will be as varied and nuanced as the subjects it is applied to. The inclusion of student voices will be crucial to help those in higher education work out where those boundaries should be and to ensure the equitable and beneficial use of AI tools. After all, they aren’t going away.
“It is impossible to completely ban the use of AIs in the academic environment,” Paris says. “Rather than prohibiting them, it is more important to rethink courses around AIs.”