There are plenty of deals out there, but this one may come as a bit of a surprise: the PS5 Access controller has hit a record-low price, a little more than a third off its $90 list price.
Sony created this controller to make the PlayStation 5 more accessible to a wider range of players. The Access controller launched last December.
Sony's accessibility controller is on sale this Black Friday, with the Access controller marked down to $59.
The Access Controller comes with 19 button caps and three stick caps to help players find the setup that works best for them. For example, a button cap that covers two sockets may be more convenient for some players than a standard cap. There are also 23 swappable tags for the button caps to help players identify which input they have mapped to each button. In addition, there are four 3.5mm auxiliary ports where players can connect external buttons, switches and other accessories.
Up to 30 profiles can be created with different button and stick configurations. Buttons can also be disabled to prevent accidental presses. Meanwhile, there is the option to pair up to two Access controllers and one standard DualSense controller as a single virtual controller. That way, up to three people can control the same character, meaning loved ones and carers can provide direct assistance to those playing.
Gemini Nano, Google's smallest artificial intelligence (AI) model in the Gemini family to date, is now expanding to all Android developers. Until now, the model has powered features in Google's own apps, such as Google Messages and Pixel Recorder, on compatible Pixel smartphones and the Galaxy S24 series. With this expansion, however, third-party apps will also be able to use the model's capabilities. At the same time, the Gemini app reportedly lets users share photos directly from other apps using the Android Share Sheet.
Gemini Nano expanded to all Android app developers
The Mountain View-based tech giant introduced Gemini Nano in 2023 as its smallest language model, distilled from the larger Gemini AI model. It is designed to handle on-device AI tasks. So far, it has been used to power AI features in Google's first-party apps on the newest Pixel phones and the Galaxy S24 series.
That is now changing, as the company has announced that the AI model will be available to Android app developers, who can implement Gemini Nano capabilities in their apps using the AI Edge SDK via AICore. Google said that developers will initially only have access to text prompts on Pixel 9 series smartphones. Support for more devices and input modalities will, however, be added in the future.
Share photos from Android with the Gemini app
According to a report, Gemini v1.0.668480831 allows photos in the Gallery app or another third-party app to be shared directly with Gemini via the Android Share Sheet. This feature could prove useful, especially for users with large numbers of photos stored on their devices.
With this, users can send a photo to Gemini straight from the app where it lives. Once shared, they can open the Gemini app and add a query. Gadgets 360 staff were unable to verify the feature after updating to the specified version.
Apart from this, the tech giant also updated Gemini Live with support for Hindi and eight regional languages at the Google for India 2024 event, while AI Overviews will soon be available in four regional languages in addition to Hindi and English.
Whether that will happen remains to be seen, but Google is ending the era of free access to its Gemini API, signaling a new financial strategy within its AI development.
Developers were previously given free access to lure them towards Google’s AI products and away from OpenAI’s, but that is set to change. OpenAI was first to market and has already monetized its APIs and LLM access. Now Google is planning to emulate this through its cloud and AI Studio services, and it seems the days of unfettered free access are numbered.
RIP PaLM API
In an email to developers, Google said it was shutting down access to its PaLM API (the pre-Gemini model which was used to build custom chatbots) to developers via AI Studio on August 15. This API was deprecated back in February.
The tech giant is hoping to convert free users into paying customers by promoting the stable Gemini 1.0 Pro. “We encourage testing prompts, tuning, inference, and other features with stable Gemini 1.0 Pro to avoid interruptions,” the email reads. “You can use the same API key you used for the PaLM API to access Gemini models through Google AI SDKs.”
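For developers making the move, the switch largely amounts to pointing an existing key at a Gemini model. Here is a minimal sketch using the Python flavour of the Google AI SDK; the model name, prompt and environment-variable name are illustrative assumptions, so check the SDK documentation for current details:

```python
# Minimal migration sketch (illustrative assumptions noted above): reuse an
# existing API key with the Google AI Python SDK to call a Gemini model
# instead of the retired PaLM API.
import os

import google.generativeai as genai  # pip install google-generativeai

# Per Google's email, the key previously used for the PaLM API also works here.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Gemini 1.0 Pro is the stable model Google is steering PaLM users towards.
model = genai.GenerativeModel("gemini-1.0-pro")
response = model.generate_content("Give me one sentence on migrating from PaLM to Gemini.")
print(response.text)
```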
Pricing for the paid plan begins at $7 for one million input tokens and rises to $21 for the same number of output tokens.
There is one exception to Google’s plans – PaLM and Gemini will remain accessible to customers paying for Vertex AI in Google Cloud. However, as HPCWire points out, “Regular developers on cheaper budgets typically use AI Studio as they cannot afford Vertex.”
The Bill & Melinda Gates Foundation, one of the world’s top biomedical research funders, will from next year require grant holders to make their research publicly available as preprints, articles that haven’t yet been accepted by a journal or gone through peer review. The foundation also said it would stop paying for article-processing charges (APCs) — fees imposed by some journal publishers to make scientific articles freely available online for all readers, a system known as open access (OA).
The Gates Foundation is the first major science funder to take such an approach with preprints, says Lisa Hinchliffe, a librarian and academic at the University of Illinois Urbana–Champaign. The policies — which take effect on 1 January 2025 — elevate the role of preprints and are aimed at reducing the money the Gates Foundation spends on APCs, while ensuring that the research is free to read.
But the policy’s ramifications are unclear. “Whether this will help the open-access movement or not, it’s hard to know,” Hinchliffe says. On the one hand, more research will become freely available in preprint form, she notes. On the other, the final published versions of articles, known as the version of record, might become harder to access. Under the revised rules, after sharing their manuscript as a preprint, authors will be allowed to submit it to the journal of their choice and will no longer be required to select the OA option.
“Our decision is driven by our goals of immediate access to research, global reuse and equitable action,” says Ashley Farley, programme officer of knowledge and research services at the Gates Foundation in Seattle, Washington. Grant recipients will still be required to post their preprints under a licence that allows their contents to be reused, she says. The foundation plans to publish the full policy within the next couple of weeks.
OA efforts
The Gates Foundation announced in 2015 that it would require its grant recipients to make their research articles freely available at the time of publication by placing them in open repositories. It later joined cOAlition S — a group of mainly European research funders and organizations supporting OA academic publishing — and endorsed the group’s Plan S, by which funders mandate that grant holders publish their work through an OA route.
But the Gates Foundation’s latest policy puts it on course to diverge from the group. It is not “entirely in line with cOAlition S”, says Johan Rooryck, executive director of the coalition, who is based in Leiden, the Netherlands. Whereas cOAlition S requires either an accepted manuscript or the version of record to be available OA, he says, “the Gates Foundation is clearly of the opinion that the preprint is sufficient”. He notes that the group allows for “a lot of leeway in policies” between its members, adding that the Gates policy continues to uphold key aspects of Plan S, such as promoting authors’ retention of rights to their accepted manuscripts.
The coalition has been examining the role of preprints in OA, but it’s a long way from adopting any related policy changes, Rooryck says. A document released by the group last year discussed the issue, and the coalition is gathering feedback from the research community through a survey open until 22 April. No decisions will be made on adopting any proposal before the end of the year.
Another difference between Plan S and the Gates policy is their stance on APCs. “Ending support for APC payments is not the cOAlition S policy, I can be very clear about that,” Rooryck says. “That’s a decision that Gates has taken. It’s not a decision that we, as cOAlition S, are ready to make by 1 January 2025.”
Ending support for APCs is a “very sensible plan” given the unsustainable increase of such charges in recent years, says Lynn Kamerlin, a computational biophysicist at the Georgia Institute of Technology in Atlanta. “The Gates Foundation plan is the open-access plan I would have liked to see when Plan S was announced.”
Juan Pablo Alperin, a scholarly-communications researcher at Simon Fraser University in Vancouver, Canada, notes that APCs are “inherently an unjust way” of supporting OA. “Stopping support for APCs sends a signal to the larger community, including the community of funders, that this mechanism is not a way forward,” he says.
Effects on publishing
It’s hard to predict the effects of the Gates policy on scientific publishing, says Hinchliffe. Some grant holders might find it harder to publish in OA journals, and rely more on preprints to disseminate their work. But others might continue to publish through OA journal routes, especially if they have other funding sources to cover the APCs, or if their institutions’ libraries have agreements with publishers to reduce the costs of OA publishing.
Although the Gates Foundation is a big funder — with a budget of US$8.6 billion in 2024 — it still funds only a modest percentage of the world’s research, Hinchliffe notes, and it’s not clear whether other funders will follow suit. Some, even among those that require OA publishing, already refuse to cover APCs.
Another potential consequence of the policy is that there might be a difference in the quality of a manuscript freely available as a preprint and its final version behind a paywall. In certain cases, people with access to the final version are going to be in a better position to avoid particular kinds of mistake than are those who rely solely on the preprint, Hinchliffe says. Kamerlin notes that an increasing number of preprint publishers allow authors to update their preprints as many times as necessary, which could ease that concern.
Farley says that there is growing evidence that errors in early versions of preprints are addressed quickly, “as there is a much broader pool of researchers to read and evaluate the preprint”. The foundation will provide grant recipients with a list of recommended preprint servers “that have demonstrated a level of checks that ensure the scientific validity of research”, she adds. It has also invested in a new preprint service called VeriXiv, “which will set new standards for preprint checking”.
Some authors might well choose not to publish formally in journals, deciding that the preprint is enough, says Alperin. “I don’t see that as being a problem in itself,” he says. “Sometimes, the goal of a journal publication has been a negative force in science, encouraging people to focus on publishing in a particular journal when the goal should really be to do high-quality research and to ensure that it is communicated and that it reaches the right audience.”
Publishers contacted by Nature’s news team said they are still assessing the Gates policy. (Nature’s news team is editorially independent of its publisher, Springer Nature.) “We are reviewing the implications of the Bill & Melinda Gates Foundation’s new open-access policy and what it means for how we support their researchers,” said a spokesperson for the publisher Elsevier in a statement.
Roheena Anand, executive director of global publishing development and sales at the publisher PLOS, which is based in San Francisco, California, said in a statement that PLOS has already recognized that the APC model of OA publishing creates inequities. “We are committed to finding sustainable and equitable alternatives. That’s why we have launched several non-APC models and are also working with a multi-stakeholder working group,” she says, “to identify more equitable routes to knowledge-sharing beyond article-based charges.” She added that there is a risk that, without established alternatives, researchers funded by the Gates Foundation will revert to publishing their work behind paywalls. “PLOS’s newer business models offer one possible alternative.”
In an article announcing the changes, Estee Torok, a senior programme officer at the Gates Foundation, wrote that the organization has paid around $6 million in APCs per year since 2015. “We’ve become convinced that this money could be better spent elsewhere to accelerate progress for people,” she wrote. Farley says that the foundation plans to invest in more equitable OA models, such as ‘diamond OA’, a system in which publishers don’t charge fees to authors or readers, as well as preprint servers and other platforms and technologies for research dissemination.
At its simplest, RAM (Random Access Memory) is a type of computer memory, often referred to as short-term memory because it is volatile, meaning that the data is not saved when the power is turned off.
When business users switch on the computer, the operating system and applications are loaded into the computer’s RAM, which is directly connected to the CPU, making the data quickly accessible for processing.
In corporate settings, RAM (memory modules) comes in different shapes and sizes. The DIMM (Dual In-Line Memory Module) can be found in desktops, workstations and servers, while laptops require the physically smaller SODIMM (Small Outline DIMM).
A memory module contains several DRAM (Dynamic RAM) chips, a type of semiconductor memory. Dynamic simply means that the data held in the chips must be constantly refreshed. The number of DRAM chips found on a memory module varies depending on its capacity (8GB, 16GB, 32GB).
The lithography of DRAM chips has been revised and improved many times over recent decades and this has led not only to reductions in cost-per-bit, but also to reducing the dimensions of the component and increasing the clock rate. Overall, DRAM now delivers faster performance and higher capacities but uses less power which cuts energy costs, controls heat and extends battery life.
DRAM operates in one of two modes, synchronous or asynchronous. Asynchronous was the common DRAM technology used up until the end of the 1990s. Synchronous mode means that read, write and refresh operations are controlled by a system clock, synchronized with the clock speed of the computer’s CPU. Today’s computers use synchronous mode, or Synchronous Dynamic Random Access Memory (SDRAM), which connects to the system board via a memory module.
Iwona Zalewska
DRAM business manager, Kingston EMEA.
New generations of DRAM
The latest version of SDRAM is DDR5 (Double Data Rate 5th generation), which comes in a range of standard speeds starting at 4800 MT/s (megatransfers per second), a measure of the rate at which data is transferred on and off the memory module. Approximately every seven years a new memory generation is introduced, designed to accommodate the ever-increasing demand for speed, density and configurations in business computing environments. DDR5, for example, debuted in 2021 and is designed with new features that provide higher performance, lower power and more robust data integrity for the next decade of computing.
IT decision makers who are considering purchasing memory must be aware that memory modules are not backwards compatible across generations: DDR5 memory will not physically slot into a DDR4 or DDR3 memory socket. Within a memory generation, however, faster speeds are backwards compatible. For example, if a user buys a standard DDR5-5600MT/s module and uses it with a 12th Generation Intel processor, the memory will automatically ‘clock down’ to operate at 4800 MT/s, the speed supported by the host system, or lower. This will vary depending on the model of the CPU and the number of memory modules installed in the system.
It’s essential to know which processor and motherboard are already installed in the computer when planning a memory upgrade, but there are some other considerations too. Most PCs have four RAM sockets; some, such as workstations, have as many as eight, but laptops are likely to have only two accessible memory sockets, and in thin models there may be only one.
Different types of RAM
Even though they may look similar and serve the same function, the memory modules found in HEDT (High-End Desktop) systems and servers differ from those found in PCs. Intel Xeon and AMD Epyc server CPUs come with a higher number of CPU cores and more memory channels than Intel Core and AMD Ryzen desktop CPUs, so the specifications and features of server RAM differ from those of PC RAM.
Server CPUs require Registered DIMMs (RDIMMs), which support the ECC (Error Correcting Code) feature, allowing bit errors that occur on the memory bus (between the memory controller and the DRAM chip) to be corrected and ensuring the integrity of the data. The RCD (Registering Clock Driver) is an additional component found on RDIMMs but not on Unbuffered DIMMs (UDIMMs); it ensures that all components on the memory module operate on the same clock cycle, allowing the system to remain stable when a large number of modules is installed.
The type of memory module made for desktops and laptops is generally the Non-ECC Unbuffered DIMM. The data processed on these systems is considered less critical than the data processed by servers, which may be hosting websites or handling online transaction processing, for example, and need to meet specific SLAs (Service-Level Agreements) and uptimes of 99.9999%, 24/7. Non-ECC UDIMMs contain fewer components and features than RDIMMs and are therefore more affordable, while remaining a reliable memory solution. Unbuffered RAM exists in both DIMM and SODIMM form factors.
Boosting performance
RAM is primarily sold in single modules, but it is also available in kits of two, four or eight, ranging in capacity from 4GB for DDR3 to 96GB for DDR5 (in single modules) and up to 256GB in kits (256GB is offered only as a kit of eight in DDR4 and DDR5). The configurations match the memory channel architecture and, when installed correctly, can deliver a major boost in performance. To give an example of the performance potential, moving from a single DDR5-4800 module with a peak bandwidth of 38.4GB/s to a dual-channel setup instantly doubles the bandwidth to 76.8GB/s.
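As a rough sanity check on those figures, peak theoretical bandwidth can be estimated as the transfer rate multiplied by the 64-bit (8-byte) width of a memory channel. The sketch below uses that simplified model, which ignores real-world overheads such as refresh cycles and timing latencies:

```python
# Rough peak-bandwidth estimate for DDR memory: transfers per second
# multiplied by the channel width in bytes. Illustrative only; sustained
# real-world bandwidth will be lower.

CHANNEL_WIDTH_BYTES = 8  # a standard DDR memory channel is 64 bits wide

def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int = 1) -> float:
    """Peak bandwidth in GB/s for a given MT/s rating and channel count."""
    return transfer_rate_mts * 1_000_000 * CHANNEL_WIDTH_BYTES * channels / 1e9

print(peak_bandwidth_gbs(4800))              # single channel -> 38.4 GB/s
print(peak_bandwidth_gbs(4800, channels=2))  # dual channel   -> 76.8 GB/s
```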
Accelerating speed
Users running industry-standard speeds are limited to what their computer’s processor and motherboard will support, particularly if the platform won’t allow modules to be installed in a second memory bank. On a dual-channel motherboard with four sockets, these are arranged in two memory banks, with two sockets per memory channel. If a DDR5 user does install modules in a second bank, in most cases the memory may be forced to clock down to a slower speed to allow for limitations inside the processor.
Users looking for a considerable boost, such as gamers, can opt for overclockable memory. This can be done safely using Intel XMP and AMD EXPO profiles; however, professional help is advisable. Selecting the right gaming memory for overclocking a system means weighing price versus speed versus capacity, the potential limitations of motherboards and processors, and RGB versus non-RGB (to bring in the benefits of lighting).
Useful glossary of terms
Apart from the acronyms we’ve already explained above, here are some additional terms that it will be useful to know:
CPU – Central Processing Units are the core of the computer.
PMIC – Power Management Integrated Circuits help to regulate the power required by the components of the memory module. For server-class modules, the PMIC uses 12V; for PC-class modules, it uses 5V.
SPD hub – DDR5 uses a new device that integrates the Serial Presence Detect EEPROM with additional features, manages access to the external controller and decouples the memory load on the internal bus from the external bus.
On-die ECC – Error Correction Code that mitigates the risk of data leakage by correcting errors within the chip, increasing reliability and reducing defect rates.
MHz – MHz is an abbreviation of megahertz and means a million cycles per second, or one million hertz. This unit of frequency measurement is used to denote the speed at which data moves within and between components.
MT/s is short for megatransfers (or million transfers) per second and is a more accurate measurement for the effective data rate (speed) of DDR SDRAM memory in computing.
Non-binary memory – The density of DRAM chips usually doubles with each iteration, but with DDR5, an intermediary density – 24Gbit – was introduced, which provides more flexibility and is called non-binary memory.
GB/s – Gigabytes per second. A Gigabyte is a unit of data storage capacity that is approximately 1 billion bytes. It has been a common unit of capacity measurement for data storage products since the mid-1980s.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
A man with a balloon for a head is somehow not the weirdest thing you’ll see today thanks to a series of experimental video clips made by seven artists using OpenAI’s Sora generative video creation platform.
Unlike OpenAI’s ChatGPT AI chatbot and the DALL-E image generation platform, the company’s text-to-video tool still isn’t publicly available. However, on Monday, OpenAI revealed it had given “visual artists, designers, creative directors, and filmmakers” access to Sora and shared their efforts in a “first impressions” blog post.
While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI’s Artist In Residence Alex Reben’s 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller’s video depicts models melded with what looks like translucent stained glass.
Not all the videos are so esoteric.
OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI sora / Don Allen Stevenson III)
If we had to give out an award for most entertaining, it might be multimedia production company shy kids’ “Air Head”. It’s an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.
Sora’s ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids’ Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it’s a funny and extremely surreal little movie.
But wait, it gets stranger.
The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III’s “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.
OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man’s balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?
That OpenAI has invited creatives to take Sora for a test run is not surprising. It’s their livelihoods in art, film, and animation that are most at risk from Sora’s already impressive capabilities. Most seem convinced it’s a tool that can help them more quickly develop finished commercial products.
“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It’s enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.
Go watch the clips but don’t blame us if you wake up in the middle of the night screaming.
Google is finally giving some attention to the UI on its stock apps. It is bringing a unified look and feel to its apps and ecosystem of products. An upcoming design change to the Google Play Store will make it easier to access search within the app.
Google Play Store gets Search option on the bottom bar
In December 2023, Google started testing the placement of the search icon on the bottom bar of the Play Store app. Now, the new placement seems to be going live for some Android users. Some Galaxy users might be able to see this change when they open the Google Play Store the next time. It certainly makes accessing the search screen easier, as it now sits closer to your fingers.
With this change, there are now five icons on the bottom bar of the Play Store. Earlier, there were four icons: Games, Apps, Offers, and Books. However, when you access the new search option, it takes you to a new screen where the search bar is at the top, making this whole change a bit strange. This screen also displays search suggestions and trending app and game searches from around the world.
This new design is visible in the latest version of the Google Play Store (version 40.1.19-31), but not everyone will have received it yet. It might take a few weeks before this design change appears on your device.
Farhana Sultana approaches research on environmental harms and social inequities in tandem. Credit: Wainwright Photos
FARHANA SULTANA: Collaborate to advance water justice
Throughout my childhood in Dhaka, Bangladesh, the frantic call ‘Pani chole jaitese!’ (‘The water is running out!’) prompted my family, along with the entire neighbourhood, to scramble to fill pots and buckets with water before the taps ran dry. I witnessed women and girls walk long distances to secure this basic necessity for their families, long before water governance became central to my academic career. Amid water insecurity, the opposite extreme was just as familiar — going to school through devastating floods and experiencing the fall-out from disastrous cyclones and storm surges.
Municipal water services in Dhaka also struggled to meet the growing demands of a rapidly urbanizing and unequal megacity. Access to electricity — needed to run water pumps — was sporadic, and there weren’t enough treatment plants to ensure clean water for millions of residents.
These early experiences fuelled my dedication to tackling water injustices. Today, as an interdisciplinary human geographer with expertise in Earth sciences, and with policy experience gained at the United Nations, I approach environmental harms and social inequities in tandem — the root causes that connect both must be addressed for a just and sustainable future. My research also encompasses climate justice, which is inextricably linked with water justice. Climate change intensifies water-security concerns by worsening the unpredictability and severity of hazards, from floods and droughts to sea-level rise and water pollution.
Such events hit marginalized communities the hardest, yet these groups are often excluded from planning and policymaking processes. This is true at the international level — in which a legacy of colonialism shapes geopolitics and limits the influence of many countries in the global south on water and climate issues — and at the national level.
However, collaborative work between affected communities, activists, scholars, journalists and policymakers can change this, as demonstrated by the international loss-and-damage fund set up last year to help vulnerable countries respond to the most serious effects of climate-related disasters. The product of decades of globally concerted efforts, this fund prioritizes compensation for low-income countries, which contribute the least to climate change but often bear the brunt of the disasters.
I also witnessed the value of collaboration and partnership in my research in Dhaka. Community-based groups, non-profit organizations and activists worked with the Dhaka Water Supply and Sewerage Authority to bring supplies of drinking water at subsidized prices to marginalized neighbourhoods, such as Korail, where public infrastructure was missing.
Globally, safe water access for all can be achieved only by involving Indigenous and local communities in water governance and climate planning. People are not voiceless, they simply remain unheard. The way forward is through listening.
Tara McAllister is exploring the interface between Mātauranga Māori (Māori Knowledge) and non-Indigenous science. Credit: Royal Society of New Zealand
TARA MCALLISTER: Let Māori people manage New Zealand’s water
I have always been fascinated by wai (water) and all the creatures that live in it. Similar to many Indigenous peoples around the world, Māori people have a close relationship with nature. Our connection is governed by genealogy and a concept more akin to stewardship rights than to ownership. This enables us to interact with our environment in a sustainable manner, maintaining or improving its state for future generations.
I was privileged to go to university, where I studied marine biology. I then moved to the tribal lands of Ngāi Tahu on Te Waipounamu, the South Island of New Zealand, which triggered my passion for freshwater ecosystems. Intensive agriculture is placing undue pressure on the whenua (land) and rivers there. Urgent work was required. Undertaking a PhD in freshwater ecology, I studied the causes of toxic benthic algal blooms in rivers. For me, there is no better way to work than spending my days outside, with my feet in the water.
A worker fills people’s water containers from a tanker in Kolkata, India. Credit: Rupak De Chowdhuri/Reuters
Having just started a research position at Te Wānanga o Aotearoa, a Māori-led tertiary educational institution, I am now exploring the interface between Mātauranga Māori (Māori Knowledge) and non-Indigenous science, and how these two systems can be used alongside each other in water research. I have also been working on nurturing relationships with mana whenua, the community that has genealogical links to the area where I live, so that I can eventually work in the community’s rivers and help to answer scientific questions that its members are interested in.
Despite a perception that Aotearoa (New Zealand) is ‘clean and green’, many of its freshwater ecosystems are in a dire state. Only about 10% of wetlands remain, and only about half of rivers are suitable for swimming. Water resource management is challenging, because of a change this year to a more right-wing government. The current government seems intent on revoking the National Policy Statement for Freshwater Management, established in 2020.
This policy has been crucial in improving the country’s management of freshwater resources. Although not perfect, it does include Te Mana o te Wai — a concept that posits that the health and well-being of water bodies and ecosystems must be the first priority in such management. It is now in danger of being repealed.
I think that, ultimately, our government’s inability to divulge control and power to Māori people to manage our own whenua and wai is what limits water resource management. More than any change in policy, I would like to see our stolen lands and waters returned.
Suparana Katyaini calls for more policy support for Indigenous-led water management. Credit: Milan George Jacob
SUPARANA KATYAINI: Consider water, food and land together
Growing up in New Delhi, I always had easy access to drinking water — until the summer of 2004, when a weak monsoon triggered a water crisis and the city had to rely on water tankers. I realized then that good management of water resources supports our daily lives in ways we take for granted until we experience scarcity.
My professional journey in research and teaching has been motivated by this experience. During my environmental studies of water poverty in India, I noticed that the field relied largely on quantitative data over qualitative insights — the degree of water-resources availability, access and use are typically assessed through metrics such as the water-availability index or the water-demand index. But in many places, Indigenous and local communities, including farmers and women in any occupation, have collectively developed skills to weather periods of water scarcity. Paying attention to these skills would lead to better water management. For example, the issue of food and nutritional insecurity in water-scarce areas in the state of Odisha, India, is being solved by Bonda people through revival of the crop millet, using varieties that are nutritious, water-efficient and climate-resilient.
But these efforts need more policy support. My current work at the Council on Energy, Environment and Water explores how water, food and land systems are interlinked in India, and how better understanding of these relationships can inform policies. I am looking to identify similarities and differences in objectives of national and regional policies in each sector, as well as exploring whom they affect and their intended impacts. The aim is to move towards unifying water, food and land governance.
Michael Blackstock examines climate change from a water-centred perspective. Credit: Mike Bednar
MICHAEL BLACKSTOCK: Shift attitudes towards water
In 2000, I conducted an ethnographic interview with Indigenous Elder Millie Michell from the Siska Nation in British Columbia, Canada, that transformed my interest in water from intellectual curiosity to passion. She passed a torch to me that fateful day. During our conversation for my research about the Indigenous spiritual and ecological perspective on water, she asked me: “Now that I shared my teachings and worries about water, what are you going to do about it?” She died of a stroke a few hours later.
As an independent Indigenous scholar, I went on to examine climate change from a water-centred perspective — drying rivers, downpours, floods and melting ice caps are all water. This approach, for which I coined the term ‘blue ecology’, interweaves Indigenous and non-Indigenous ways of thinking. It acknowledges water’s essential role in generating, sustaining, receiving and, ultimately, unifying life on Mother Earth. This means changing our collective attitude towards water.
In 2021, I co-founded the Blue Ecology Institute Foundation in Pavilion Lake, Canada, which teaches young people in particular to acknowledge the spiritual role of water in nature and in our lives, instead of taking it for granted as a commodity or ecosystem service. Giving back to nature with gratitude is also crucial. Such restrained consumption — taking only what is needed — would give abused ecosystems time to heal.
A focus on keeping water healthy can help to guide societies towards more sustainable environmental policies and climate-change resilience — and ensure that future generations will survive with dignity. Critics say, ‘Blue ecology is kinda out there.’ In my view, however, ‘here’ is not working.
During its recent Checkup 2024 event, Google offered an important update on Fitbit Labs, giving us an idea of when the highly anticipated AI-powered Fitbit assistant will launch.
The tech giant was coy about the official launch of its Fitbit chatbot, merely stating that it’ll come out later this year. Additionally, it’ll see a limited release, available only to the small group of Android users currently enrolled in the program on the Fitbit app.
Why are we so excited about this? The chatbot is a fitness assistant that’ll answer all of your burning questions about your personal Fitbit data in a casual way. It’s supposed to replicate what it’s like to talk to a personal trainer. Google’s tech is said to deliver “personalized insights” on how your fitness journey is going and coach you on things you can do to improve.
The Fitbit AI will also create charts and “help [people] understand [their] own data better.” As an example, chatbot responses are going to display Fitbit’s Active Zone Minutes alongside an average sleep score.
All of this information was hinted at during the AI’s initial reveal back in early October, but as we get closer to the launch, Google is filling in some of the gaps. There are still a lot of murky details. We reached out to the company asking whether there are plans to add other types of charts to the chatbot, and whether it could give us an idea of when the AI will officially roll out. We’ll update this story if we get any new info.
If you’re interested in taking the chatbot out for a spin, you can join Fitbit Labs at any time; however, you will need a subscription to Fitbit Premium first.
Google’s Search upgrade
Besides the chatbot, Google also talked about upgrading its search engine to include detailed health information about certain conditions. “Images and diagrams from high quality [online] sources” will be present in search results, allowing people to better understand the symptoms they may be feeling. The new visual resources will be rolling out globally to Google Search on mobile within the coming months. There’s no word yet on whether a similar feature will arrive on desktop.
What do you do when you forget to load a file onto your laptop and it’s now languishing at the other end of your house on a different device? In my case, I usually end up pausing whatever I’m doing, getting up and marching over to the file location, uploading it to a cloud server, then heading back to where I started and downloading the file onto the device I was originally using. In short, it’s a hassle.
This is a conundrum I often faced until very recently. Well, it probably sounds like a very minor conundrum, and I can’t really deny that. But sometimes the most minor things can feel pretty aggravating when they happen again and again. Convenience is worth a lot more than you’d think.
Being a forgetful person, this is not an uncommon problem for me. Fortunately, I’ve come across an app that allows me to fetch those forgotten files while remaining safely ensconced on my sofa. It’s the lazy man’s dream.
The app is called Screens 5, and it works like a portal from one device to the next. For example, I can open Screens 5 on my iPad and see a list of all my connected devices. I then tap on one and it loads up that computer, tablet, or phone right from the iPad. It’s like I’m sitting in front of the connected device when I’m on the other side of the house.
Control everything
That makes it sound like Screens 5 is a small fix to a small problem, but its capabilities are much wider. As long as your target device is switched on, you can access it from anywhere, even on the other side of the globe. It’s especially helpful if you know your target device will always be on, such as if you want to grab a file off of a home server. In that case, as long as you’ve got Screens 5 installed everywhere, you’re never far away from your other devices.
Crucially, Screens 5 isn’t just a viewport – you can directly control one device from the other. So, if I’m using Screens on my iPad, I can just slide my finger across the display and it moves the mouse on my Mac. I can open apps, start typing, copy and paste files, and more. Better yet, Screens 5 even lets me drag and drop files from the connected device onto the one that’s sitting in front of me, and vice versa. There’s no need to upload anything to Dropbox and no need to email anything to myself; I can just move the file where it needs to go in seconds.
Sure, I know what you’re thinking – there are already fast ways to share files between devices automatically, such as syncing things using a cloud storage service. That’s true, but those are pretty one-dimensional solutions. With Screens 5, I can take control of another device as well as sync files to it, regardless of its operating system and form factor. That’s something the likes of Google Drive and iCloud can’t offer.
Besides, Screens 5 is useful for much more than just file sharing. You can work on a document you left open on a different computer, update your device from miles away, or take a quick screenshot of one device from another.
It’s also a neat way to help someone with a problem they’re having on their device. Instead of going around in circles trying to describe the fix to them, you can just take charge of the target device and apply the solution yourself.
There are some complications to be aware of. To get Screens 5 to work with my Windows PC, for example, I had to install a complementary component called a VNC server. That sounds complicated but setting it up is a breeze. Screens 5 requires a second app called Screens Connect to (you guessed it) connect all your devices. Installing Screens Connect on Windows also installs a VNC server, so the hard work is finished by the time you’ve closed the setup wizard.
I also wouldn’t recommend trying to control your iMac or Windows PC from an iPhone screen, as controls can get very fiddly on such a small device. But you can at least zoom in if required. So if you need to use Screens and all you have in hand is your phone, it’s doable.
Overall, though, those are minor nuisances and not ones I experience very often. The VNC server in particular is a one-off problem, and Screens 5’s developer has included enough tools – such as a floating bar with thumb-sized buttons for common controls like the function keys and system settings – that make navigating your way around a small phone screen a little easier.
At the end of the day, I’m glad to have come across Screens 5. I may not be able to cure my forgetfulness completely, but at least I’ve got an app that can make it a little less problematic.