There are tons of hot Black Friday deals worth checking out, but here's one that can help you keep your home at the right temperature: Google's fourth-generation Nest Learning Thermostat can be yours for $225, a $55 discount. The deal is available at Wellbots, and you'll need to use the code IngpfNLT55 at checkout to get the savings.
Google took the wraps off the latest version of the Nest Thermostat back in August, so this is a solid deal on a new product.
Google
Google's latest Nest thermostat is available at a $55 discount from Wellbots. It has a larger, more customizable display than previous models and uses AI for temperature readings and energy-saving suggestions.
The thermostat uses AI to provide what Google claims are more accurate readings and to make suggestions on how to save energy and lower utility bills. It will also adjust its settings on its own, depending on factors such as the ambient temperature. To measure this, a wireless temperature sensor is included with the device. According to Google, the sensor can run for up to three years before its battery needs replacing. You can buy more sensors ($40 each or three for $100) and connect up to six of them to the Nest thermostat, spreading them throughout your home.
This model's display is 60 percent larger than its predecessors'. It's also more customizable: you have a variety of faces to choose from, much like on a smartwatch. You can make the thermostat look like a clock or change its colors.
One neat feature is that the Nest thermostat uses its built-in Soli radar sensors to determine how far away you are. The user interface adjusts automatically based on your proximity: the farther back you stand, the larger the font gets, to improve legibility.
Google's latest fourth-generation Nest Learning Thermostat went on sale just a month ago, but a deal already brings the price down to $260. Simply enter the code “20ENGNLT4” at checkout. This is a notably low price, mainly because the product has only just launched.
This is not an iterative upgrade over the previous Nest Learning Thermostat. It's a complete redesign, with a new look and plenty of updated features. The LCD screen is 60 percent larger than the old third-generation device's, and it has a curved front that eliminates the appearance of bezels. It's all screen now.
Google
This thermostat just launched on August 20.
That bigger screen lets you customize much more, with interfaces as personalizable as a smartwatch's. It can even look like a regular analog clock. The UI automatically adjusts what's shown on the display depending on how far you are from the thermostat, thanks to the built-in Soli radar sensors.
The new Nest is equipped with AI, which aims to provide more accurate readings and suggest potential ways to save money on your monthly utility bill. It's a smart thermostat, so it can be programmed to take actions on its own, depending on the ambient temperature and other factors.
To that end, the product comes with a wireless temperature sensor that can be placed anywhere within range. Each Nest device can integrate with up to six of these sensors, and more are available for $40 each or three for $100. The product also comes with a trim plate to cover paint and drywall blemishes and a steel plate for electrical-box installations. It's also likely to be relatively future-proof, given that the third-generation Nest debuted back in 2015.
The LCD screen is 60 percent larger than the (now outdated) third generation's, with a curved front that eliminates the appearance of bezels. Basically, everything is a display now, without the giant black plastic ring around the outside. That leads to another new feature: the larger screen allows for more customization, and the fourth-gen Nest offers customizable faces. This works just like it does on smartwatches. You can turn the face into a clock, change the colors or set the background to something artistic.
Google
No matter which face you choose, the UI automatically adjusts what's shown on the display depending on how far you are from the thermostat. For instance, the font gets bigger the farther back you stand. It's all thanks to the built-in Soli radar sensors.
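Google hasn't detailed how this scaling behaves, but conceptually it's a mapping from sensed distance to type size. Here's a purely hypothetical sketch of that idea; the distance range and point sizes are invented for illustration:

```python
# Hypothetical sketch: scale UI font size with the viewer's distance,
# as reported by a proximity sensor. Not Google's implementation.

def font_size_for_distance(distance_m: float,
                           near_m: float = 0.5, far_m: float = 3.0,
                           min_pt: int = 24, max_pt: int = 96) -> int:
    """Linearly interpolate between min_pt (up close) and max_pt
    (far away), clamping outside the [near_m, far_m] range."""
    t = (distance_m - near_m) / (far_m - near_m)
    t = max(0.0, min(1.0, t))            # clamp to [0, 1]
    return round(min_pt + t * (max_pt - min_pt))

print(font_size_for_distance(0.4))    # 24 pt: standing at the unit
print(font_size_for_distance(1.75))   # 60 pt: mid-room
print(font_size_for_distance(4.0))    # 96 pt: across the room
```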
As for the thermostat's actual internals, the new Nest leverages AI to take more accurate readings and suggest potential ways to save on your monthly energy bill. It will even act on its own depending on the ambient temperature and other factors. The thermostat also comes with a wireless temperature sensor you can place anywhere, which is useful for detecting cold spots or finding the perfect average temperature across the house. Google says this wireless device can last three years before the battery needs replacing. You can also buy sensors separately, with each Nest integrating with up to six of them.
The device boasts some eco-friendly touches. The internal battery is made entirely from recycled cobalt, and the packaging is plastic-free. Pre-orders for the new Nest Learning Thermostat are open now, and shipments begin on August 20. It's available in three colors: silver, black and gold. Each thermostat costs $280, with extra temperature sensors at $40 each or $100 for a three-pack.
Intel has launched a new AI processor series for the edge, promising industrial-class deep learning inference. The new ‘Amston Lake’ Atom x7000RE chips offer up to double the cores and twice the graphics base frequency of the previous x6000RE series, all neatly packed into a 6W–12W BGA package.
The x7000RE series packs more performance into a smaller footprint. Boasting up to eight E-cores, it supports LPDDR5/DDR5/DDR4 memory and up to nine PCIe 3.0 lanes, delivering robust multitasking capabilities.
Intel says its new processors are designed to withstand challenging conditions, enduring extreme temperature variations, shock, and vibration, and to operate in hard-to-reach locations. They offer 2x SATA Gen 3.2 ports, up to 4x USB 3.2 Gen 2 ports, a USB Type-C port, 2.5GbE Ethernet connection, along with Intel Wi-Fi, Bluetooth, and 5G platform capabilities.
Embedded, industrial, and communication
The x7000RE series consists of four SKUs, all suitable for embedded, industrial, and communication use under extended temperature conditions. The x7211RE and x7213RE have 2 cores and relatively lower base frequencies, while the x7433RE has 4 cores, and the x7835RE has 8 cores with higher base frequencies.
All four SKUs ship with either 16 or 32 GPU execution units and support Intel’s Time Coordinated Computing and Time-Sensitive Networking GbE features. The x7000RE chips offer integrated Intel UHD Graphics, Intel DL Boost, Intel AVX2 with INT8 support, and OpenVINO toolkit support.
Intel says the chips will allow customers to easily deploy deep learning inference at the industrial edge and in smart cities, and “enhance computer vision solutions with built-in AI capabilities and ecosystem-enabled camera modules” as well as “capture power- and cost-efficient performance to enable latency-bounded workloads in robotics and automation.”
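Intel doesn't prescribe a specific workflow here, but the advertised OpenVINO support implies the toolkit's standard inference flow on the CPU. A minimal sketch using the current OpenVINO Python API, assuming you already have a model converted to IR format (the "model.xml" path is a placeholder):

```python
# Minimal OpenVINO inference sketch for an x7000RE-class edge CPU.
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")   # run on the Atom's E-cores

# Dummy input shaped like a typical 224x224 vision model; use real frames here.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

result = compiled(frame)                # single synchronous inference
scores = result[compiled.output(0)]     # raw output tensor
print(scores.shape)
```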
Artificial General Intelligence, when it exists, will be able to do many tasks better than humans. For now, the machine learning systems and generative AI solutions available on the market are a stopgap to ease the cognitive load on engineers, until machines which think like people exist.
Generative AI is currently dominating headlines, but its backbone, neural networks, have been in use for decades. These Machine Learning (ML) systems historically acted as cruise control for large systems that would be difficult to constantly maintain by hand. The latest algorithms also proactively respond to errors and threats, alerting teams and recording logs of unusual activity. These systems have developed further and can even predict certain outcomes based on previously observed patterns.
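As a toy illustration of that alerting pattern (not any particular vendor's system), a rolling-baseline detector can flag samples that deviate sharply from recent history and hand them to an alerting or logging pipeline:

```python
# Flag metric samples that sit far outside the recent rolling baseline,
# the way an ML-assisted monitoring system might alert on unusual activity.
from collections import deque
import statistics

def alert_stream(samples, window=30, threshold=3.0):
    history = deque(maxlen=window)
    for t, value in enumerate(samples):
        if len(history) >= 5:                         # wait for a baseline
            mean = statistics.fmean(history)
            std = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            if abs(value - mean) / std > threshold:
                yield t, value                        # anomalous sample
        history.append(value)

latency_ms = [12, 13, 11, 12, 14, 12, 13, 250, 12, 11]
for t, v in alert_stream(latency_ms):
    print(f"t={t}: anomalous latency {v} ms")         # flags t=7 (250 ms)
```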
This ability to learn and respond is being adapted to all kinds of technology. One persistent example is the use of AI tools in envirotech. Whether it's enabling new technologies with vast data-processing capabilities or improving existing systems by intelligently adjusting their inputs, AI at this stage of development is so open-ended that it could theoretically be applied to any task.
Roman Khavronenko
Co-Founder of VictoriaMetrics.
AI’s undeniable strengths
GenAI isn't inherently energy-intensive. A model or neural network is no more energy-hungry than any other piece of software while it is operating; it's the development and training of these AI tools that generates the majority of the energy costs. The justification for this consumption is that the future benefits of the technology are worth the cost in energy and resources.
Some reports suggest many AI applications are ‘solutions in search of a problem’, and many developers are using vast amounts of energy to build tools that could produce dubious energy savings at best. One of the biggest genuine benefits of machine learning is its ability to read through large amounts of data and summarize insights for humans to act on. Reporting is a laborious and frequently manual process; time saved on reporting can be shifted to actioning machine learning insights and actively addressing business-related emissions.
Businesses are under increasing pressure to start reporting on Scope 3 emissions, which are the hardest to measure and the biggest contributor to emissions for most modern companies. Capturing and analyzing these disparate data sources would be a smart use of AI, but it would still ultimately require regular human guidance. Monitoring solutions already exist on the market to reduce the demand on engineers, so taking this a step further with AI is an unnecessary and potentially damaging innovation.
Replacing the engineer with an AI agent reduces human labor, but it removes a complex interface only to put equally complex programming in front of it. That isn't to say innovation should be discouraged. It's a noble aim, but don't be sold a fairy tale that this will happen without any hiccups. Some engineers will eventually be replaced by this technology, but the industry should approach it carefully.
Consider self-driving cars. They're here, and they're doing better than the average human driver. But in some edge cases they can be dangerous. The difference is that this danger is easy to see, compared with the potential risks of AI.
Today’s ‘clever’ machines are like naive humans
AI agents at the present stage of development are comparable to human employees: they need training and supervision, and will gradually fall out of date unless retrained from time to time. Similarly, as has been observed with ChatGPT, models can degrade over time. The mechanics driving this degradation are not clear, but these systems are delicately calibrated, and that calibration is not a permanent state. The more flexible the model, the more likely it is to misfire and function suboptimally. This can manifest as data or concept drift, where a model invalidates itself over time. This is one of many inherent issues with attaching probabilistic models to deterministic tools.
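To make the drift point concrete: one standard check compares a feature's live distribution against the distribution the model was trained on, and schedules retraining when they diverge. A simplified sketch, with illustrative data and thresholds:

```python
# Detect data drift with a two-sample Kolmogorov–Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # seen at training time
live_feature = rng.normal(loc=0.6, scale=1.0, size=5_000)   # shifted in production

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}); schedule retraining.")
```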
A concerning area of development is the use of AI for natural language inputs, aiming to make complex systems easier for less technical employees or decision-makers and to save on hiring engineers. Natural language outputs are ideal for translating the expert, subject-specific outputs of monitoring systems in a way that makes the data accessible to the less data-literate. Despite this strength, even summarizations can be subject to hallucinations in which data is fabricated. This issue persists in LLMs and could create costly errors where AI is used to summarize mission-critical reports.
The risk is that we create AI overlays for systems that require deterministic inputs. Trying to lower the barrier to entry for complex systems is admirable, but these systems require precision. AI agents cannot explain their reasoning, or truly understand a natural language input and work out the real request the way a human can. Moreover, such an overlay adds another layer of energy-consuming software to a tech stack for minimal gain.
We can’t leave it all to AI
The rush to ‘AI everything’ is producing a tremendous amount of wasted energy. With 14,000 AI startups currently in existence, how many will actually produce tools that benefit humanity? While AI can improve the efficiency of a data center by managing resources, that ultimately doesn't manifest as a meaningful energy saving: in most cases the freed capacity is channeled into another application, using up any saved resource headroom, plus the cost of yet more AI-powered tools.
Can AI help achieve sustainability goals? Probably, but most advocates fall down at the ‘how’ part of that question, in some cases suggesting that AI itself will come up with new technologies. Climate change is now an existential threat with so many variables to account for that it stretches the comprehension of the human mind. Rather than tackling this problem directly, technophiles defer responsibility to AI in the hope it will provide a solution at some point in the future. The future is unknown, and climate change is happening now. Banking on AI to save us is simply crossing our fingers and hoping for the best, dressed up as neo-futurism.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
The church at Ntarama, a 45-minute drive south of Rwanda’s capital, Kigali, is a red-brick building about 20 metres long by 5 metres wide. Inside are features seen in Catholic churches around the world: pews for congregation members, an altar, stained-glass windows and a cross adorning the entrance. Then there are the scars of the unimaginable: piles of blood-stained clothing hanging along the walls and glass cabinets containing more than 260 human skulls, many fractured or shattered, some with rusted weapons still penetrating them. Nearby, wooden sticks and roughly carved clubs lean against the altar.
Ntarama is the site of one of the many massacres that occurred during the 1994 genocide against the Tutsi in Rwanda — one of the worst atrocities of the late twentieth century. Starting on 7 April that year, in 100 days of horrifying violence, members of the Hutu ethnic group systematically killed an estimated 800,000 Tutsi — or more than one million, according to the Rwandan government and other sources. The killers ranged from militias to ordinary citizens, with neighbours turning on neighbours. Many moderate Hutu and some of the Twa minority group were also killed.
Rwanda: From killing fields to technopolis
More than 5,000 Tutsi were murdered at Ntarama, among them babies, children and pregnant women, many of whom were raped before they were killed, says Evode Ngombwa, site manager at the Ntarama Genocide Memorial, one of six sites in Rwanda that commemorate the atrocity. “People used money to bribe the perpetrators so that they could choose the way of being eliminated. Instead of killing them with machetes, they could choose to be shot,” says Ngombwa as he walks me through the church. With more remains being found each year, about 6,000 people are now buried there in mass graves.
This month, Rwanda and the world begin commemorations to mark 30 years since the start of this atrocity. The genocide is now one of the most studied of its kind. Researchers from social and political scientists to mental-health specialists, geneticists and neuroscientists have investigated the event and its aftermath in a way that hadn’t been possible for previous atrocities.
This work is especially important now in light of violent crises in several parts of the world, including in Ukraine, Israel and Gaza, Sudan and the Democratic Republic of the Congo. Although there is debate about whether these conflicts meet the definition of genocide, some share similar characteristics. Research conducted into atrocities such as the genocide in Rwanda can help to inform responses and longer-term approaches to healing.
Despite the difficulties of these studies, researchers say that they are working towards developing a theory of genocide and the conditions that spur mass violence. They are providing guidance for first responders, as well as those involved in peacebuilding and supporting survivors of other systematic mass murders and of war. Some of their approaches have been used in other conflicts. And the research on Rwanda is offering lessons for how scholars can improve studies of similar events.
At a vigil in April 2019, young Rwandans commemorate the 25th anniversary of the genocide.Credit: Yasuyoshi Chiba/AFP/Getty
“Genocide studies are important,” says Phil Clark, an international-politics researcher at SOAS, part of the University of London, who has studied Rwanda for more than two decades. “If we can start to understand why and how genocides happen, and especially if we can compare genocides across the world, we should ideally be able to build a general theory of how these terrible events are even possible.”
One of the lessons emerging from Rwanda is the importance of involving — and supporting — local researchers, whose work, language skills and access to traumatized communities can be essential for understanding the roots of violence and the best techniques for reconciliation. This can be difficult — in Rwanda’s case because the genocide wiped out almost its entire academic community. Now, through programmes aimed at elevating local scholars’ voices, their work is finally reaching a wider audience.
Patterns of violence
Before 1994, the field of genocide studies was dominated by the Holocaust — the systematic killing of 6 million Jewish people by Nazi Germany during the Second World War. “It’s only in the last 20 years that other genocides have entered the discussion,” says Clark. But research on Rwanda didn’t start immediately. “It was only maybe 10–15 years after the genocide that scholars started to really interrogate this question of what drove hundreds of thousands of everyday civilians to participate in mass violence.”
Scholars say that it’s important not to forget the genocide’s strong link to colonialism in Rwanda. In the early 1900s, Belgian colonizers began formally dividing Rwandan people into social classes: Hutu, Tutsi and Twa. Designations were often based on pseudoscientific ideas, including phrenology and arbitrary observations, such as how many cattle a person owned. Ethnic tensions between Hutu and Tutsi intensified over the decades and several massacres of Tutsi occurred in the period leading up to 1994. This set the stage for a descent into genocide — a legal term that is defined by the perpetration of certain crimes that are intended to destroy a particular group, and is codified by the United Nations’ 1948 Genocide Convention.
Each genocide is unique, says Timothy Longman, a political scientist at Boston University in Massachusetts, who first went to Rwanda in 1992 and returned in 1995 as a researcher with Human Rights Watch, an international non-governmental organization that was one of the first to investigate the event. “But there also are some common patterns,” he says. Researchers can learn a lot from studying cases such as Rwanda, the Holocaust and other genocides, he says. “It helps you to prevent violence from happening elsewhere.”
One of the main scientific contributions of studies so far are the insights from mental-health researchers, many of whom were on the ground in the immediate aftermath. Over the past three decades, they have documented the initial trauma of an entire country and the slow recovery of survivors and their children, many of whom are prone to being retraumatized. With few available resources, Rwanda had to build up its mental-health services and it has gained unique experience in responding to the atrocity’s aftermath.
Source: Y. Kayiteshonga et al. Rwanda Mental Health Survey 2018 (Govt of Rwanda, 2021).
At the Rwanda Biomedical Centre (RBC) in Kigali, the nation’s main health organization, Jean Damascène Iyamuremye recalls his experience of 1994. “I witnessed everything that happened.” Iyamuremye was a 28-year-old training to be a medical assistant, but the genocide spurred him to specialize in mental health. He was among the first medical staff supporting survivors. “We were like firefighters,” says Iyamuremye, who is now director of the psychiatric unit in the RBC’s mental-health division, which oversees countrywide services.
The first care came mostly from outsiders. Non-governmental organizations provided psychological interventions such as counselling for the survivors, most of whom had experienced physical violence as well as unimaginable emotional trauma from the mass killings they’d witnessed. After the genocide, 96% of Rwandans experienced post-traumatic stress disorder (PTSD) as a result of the extreme violence1.
It took time for the country to develop its own mental-health resources. In 1994, Rwanda had only one psychiatrist, Naasson Munyandamutsa, who was living in Switzerland at the time and lost most of his family in the violence. Munyandamutsa returned quickly to Rwanda to work at the country’s sole psychiatric hospital, where he began training mental-health responders and psychiatrists.
While Munyandamutsa, who died in 2016, led the training of practitioners in Rwanda, many Rwandans went overseas to train. But about half didn’t return, says Iyamuremye.
It wasn’t until 2014 that Rwanda had its own school of psychiatry, at the University of Rwanda in Kigali. Even now, the country has only 16 psychiatrists, 13 of whom graduated from that facility, to serve a fast-growing population of 13.5 million.
Evidence-based interventions for survivors, such as counselling, cognitive behavioural therapy and medication, have continued — but people still bear significant mental scars from their experiences (see ‘Complex consequences’). In Rwanda’s most comprehensive mental-health survey yet, conducted by the RBC in 2018, about 28% of genocide survivors reported PTSD symptoms, compared with 3.6% of the general population (see ‘Trauma’s long shadow’).
Sources: Ref. 1; A. Eytan et al. Int. J. Soc. Psychiatr.61, 363–372 (2015); Y. Kayiteshonga et al. Rwanda Mental Health Survey 2018 (Govt of Rwanda, 2021).
Long-term support for survivors is important, because many can become retraumatized. For example, media reports about violence in nearby parts of the Democratic Republic of the Congo can bring back memories, says Iyamuremye. And yearly commemorations that last from April to July, called kwibuka in the national language, Kinyarwanda, bring challenges. “You will see people who fall, who are agitated, who cry” because what they experience triggers a memory, says Iyamuremye.
For this year’s commemorations, the RBC and other organizations have trained 5,000 responders around Rwanda to support distressed people. But Iyamuremye and his colleagues have learnt that the commemorations themselves can be therapeutic: they give people the opportunity to talk about their trauma and support each other.
And researchers have found that even people who weren’t alive during the genocide are suffering. “Intergenerational trauma is a challenge and a reality in Rwanda. This needs to be targeted with strong, strong interventions,” says Iyamuremye.
Trauma across generations
At the Rwanda Military Hospital on Kigali’s outskirts, Léon Mutesa, a physician and, for a long time, the nation’s only geneticist, is seeing mothers and babies at his paediatric clinic. Mutesa, who directs the Center for Human Genetics at the University of Rwanda, was the first to explore the effects of Rwandans’ trauma at the genetic level. As an undergraduate in the early 2000s, Mutesa saw that children born to women who had been pregnant in 1994 also exhibited signs of trauma. During commemorations, the children expressed symptoms such as PTSD, depression, anxiety and hallucinations from an event that they hadn’t experienced.
Inspired by studies of Holocaust survivors2, Mutesa devised a small study to investigate whether the trauma from the genocide had left epigenetic marks on individuals’ DNA through the addition of methyl groups to certain regions.
In that study3, conducted in 2012, Mutesa’s team sampled blood from women who were pregnant in 1994 and their children, as well as control participants who weren’t exposed to the genocide. The team found evidence that genocide survivors and their children bore similar epigenetic marks on certain sections of DNA.
Geneticist Léon Mutesa has studied DNA markings in genocide survivors and their children.Credit: AP Photo
Hoping to start a larger study, Mutesa collaborated with Stefan Jansen, a Belgian neuroscientist who had been at the University of Rwanda since 2011. In 2017, the pair, with US partners, won funding from the US National Institutes of Health to extend their investigations.
“We found that those mothers who were exposed had around 24 differentially methylated regions, which is really high compared to the control group,” says Clarisse Musanabaganwa, a medical research analyst at the RBC who was part of Mutesa and Jansen’s team. The team found that many of the methylated regions were the same in mothers and in the children that they were pregnant with during the genocide4,5. The research indicates a way in which trauma can transcend at least one generation, and the researchers suggest that lasting effects could be passed down through multiple generations through a mechanism of epigenetic inheritance.
But the idea of multigenerational epigenetic inheritance is controversial. Many scientists are sceptical about whether methylation marks on DNA in humans can be inherited.
“I’m not aware of any really convincing case where the transgenerational inheritance — inheritance of methylation patterns — has been demonstrated,” says Timothy Bestor, a molecular biologist in Gaylordsville, Connecticut, who holds an emeritus position at Columbia University in New York City.
But Mutesa and Jansen are seeing some practical benefits of their work. When the scientists discussed with study participants that their trauma could influence their children, they saw the participants’ resilience increase. For instance, if survivors’ children were performing poorly in school, parents now saw a possible reason. The researchers could support children with psychotherapy. “They could now understand why this is happening to their children,” says Mutesa.
Biological studies also have a broader importance, says Jansen. “We want to evidence that, and have that recorded for history: this is what happened.” The evidence helps to fight genocide denial, he says.
Beyond the epigenetic analyses, Jansen and his colleagues have strengthened methodological approaches to studying community mental health in Rwanda. These studies have informed research on conflicts elsewhere, such as in Iraq, says Jansen.
Lessons from Rwanda
The bulk of the research on the genocide in Rwanda has been in the social sciences and humanities — studying topics from reconciliation, peacebuilding and justice to the role of ethnic designations in a society after conflict. For instance, neighbouring Burundi, which experienced ethnic violence in a roughly decade-long civil war that started in 1993, chose to recognize ethnicities, whereas the Rwandan government eradicated formal ethnic distinctions after the genocide. In a global study6 that compared countries that had taken either approach after war, those that chose to recognize ethnic groups scored better on societal markers such as peace, democracy and economics.
Some of the skulls of people who were killed while seeking refuge at Ntarama in April 1994 are on display in the church.Credit: Nichole Sobecki/VII/Redux/eyevine
The growing literature on genocides has revealed that they have huge ramifications that extend well beyond the borders of the countries where they happen, say researchers.
“In terms of the scale of violence, the scale of disruption, the scale of suffering, they are enormously important events,” says Scott Straus, a political scientist at the University of California, Berkeley.
Studies had been conducted almost exclusively by Western scholars — although that’s starting to change. In the past decade, as discussions of decolonizing research began in academia, Clark started working with the UK-based Aegis Trust, which runs the Kigali Genocide Memorial. An analysis by Clark and his colleagues of 12 relevant journals showed that from 1994 to 2019, just 3.3% of studies on post-genocide Rwanda had been done by scholars from the nation (see go.nature.com/3qapae7). In 2014, with funding from the Swedish and UK development agencies, the Aegis Trust launched the Research, Policy and Higher Education (RPHE) programme, an effort to invite Rwandan scholars to submit research proposals.
“There are cultural nuances that have to be told by the very people that go through those experiences,” says Sandra Shenge, who is director of programmes at the Aegis Trust based at the Kigali Genocide Memorial, and former RPHE manager. The grants were modest — just £2,500 (US$3,150) each. But the response to the programme was amazing, says Shenge. The first call received more than 500 applications.
The aim was for Rwandan scholars to share their stories and for external researchers to provide support with advice on methodology, publishing and how best to disseminate results. These studies are collected in a resource called the Genocide Research Hub.
“The RPHE was the best thing that happened to Rwandan researchers,” says Munyurangabo Benda, a philosopher of religion at the Queen’s Foundation, an ecumenical college in Birmingham, UK. “It is the only space where Rwandan research has begun to have impact on policy.”
Photos of lives cut short by the 1994 killings are on display at the Kigali Genocide Memorial.Credit: Chris Jackson/Getty
Benda’s research7,8, supported by the RPHE, has already influenced policy. His project examined a state programme on reconciliation that had grown from a grassroots effort. His work exploring the guilt felt by children of Hutu people was inspired by the experience of his young nephew in Denmark, whose father was a Hutu. One day, his nephew’s class was studying the genocide in Rwanda and classmates asked him: “Were your family killers or survivors?” His nephew was traumatized.
The research helped to shape programmes that the Rwandan government offers for students of various ages, says Benda.
The RPHE programme also holds lessons for making the broader academic community more inclusive. According to Clark, “the problem is with journal editors and peer reviewers”, who often dismiss work from Rwanda and other countries because of preconceived ideas of quality based on where the work has been produced.
A theory of genocides
Another author whose work has been published through the Genocide Research Hub is sociologist Assumpta Mugiraneza9. From a hilltop office with views over Kigali, Mugiraneza runs an organization called the IRIBA Centre for Multimedia Heritage. Iriba means ‘source’ in Kinyarwanda, and the centre collects audio-visual archives of testimonies from the genocide and of life before 1994.
Mugiraneza says she started this work to capture Rwanda’s heritage, which was in danger of disappearing. The country’s historic oral traditions were eroded by colonization, which imposed reading and writing. As a result, Rwanda’s history is written without this richer heritage, says Mugiraneza. “Let’s go back to what we have in common: sound and image.”
Sociologist Assumpta Mugiraneza runs the IRIBA Centre for Multimedia Heritage.Credit: Carl De Keyzer/Magnum Photos
The centre, she says, is designed “to support the process of reappropriating the past”. To think about genocide, “we must dare to seek humanity where humanity has been denied”.
IRIBA’s work is extraordinary, says Zoe Norridge, who studies African literature and culture at King’s College London. “That’s the kind of work that can be done by Rwandan scholars in depth in a way that I think outsiders never really reach.”
Researchers agree that studying atrocities is a difficult undertaking. “Research involves talking to survivors who have endured unimaginable horror and putting yourself in the position to listen and hear and be empathetic,” says David Simon, who directs the Genocide Studies Program at Yale University in New Haven, Connecticut.
Still, scholars say that, through these studies, they are developing a broader understanding by identifying similarities among different genocides. These include what happened in Rwanda and the Holocaust, as well as in the genocide of the Armenian people in 1915 and of the Herero and Nama people in what is now Namibia, starting in 1904.
All of them shared common ingredients, according to researchers. The first is racializing members of society and identifying an ‘inferior’ segment of the population to be eliminated. Other factors include planning organized massacres and spreading an ideology across a whole society. The last component is the involvement of the state and its institutions, such as religious establishments and schools, as participants in the killings, says historian Vincent Duclert, who is France’s leading scholar on the 1994 genocide.
Studies in Rwanda helped to solidify the theory, says Duclert. “This pattern was really reinforced by the genocide of the Tutsi.”
Another lesson from Rwanda, say researchers, is the need to seek multiple narratives — from people inside and outside the region, and from perpetrators as well as survivors. “In 1994, and in the years immediately after, there was a very simple narrative about the Rwandan genocide being driven by ancient tribal hatreds, and that it almost explained itself away,” says Elisabeth King, who studies peace, conflict and education at New York University. Scholars, says King, have a crucial part to play in developing nuanced accounts of the complex political and social factors that underlie these events. Those explanations, in turn, can help researchers and others to understand why people commit atrocities, and could ultimately contribute to developing approaches that help to stop them.
Belongings of people killed at Ntarama, including identity cards, which showed people’s ethnicities.Credit: Ben Curtis/AP Photo/Alamy
Straus is also studying causal factors shared by different genocides, and why some conflicts that have the ingredients of genocide do not escalate into them — violence in Mali in the 1990s and Côte d’Ivoire in the early 2010s are two examples10.
Some scholars say that studying genocides can yield many benefits, but that stopping them from happening is ultimately a political matter decided by nations and international bodies.
Aggée Shyaka Mugabe, acting director of the Centre for Conflict Management at the University of Rwanda, is pessimistic about the extent to which studying genocides can ultimately stop them. “What we publish informs public policies,” says Mugabe, who studies transitional justice and peacebuilding11. But that doesn’t translate into something everyday people can understand, he adds.
Some have also raised concerns that it can be difficult for Rwandan researchers to study topics related to genocide freely, because of pressure from the government to follow a certain narrative on politically sensitive issues. But Mugabe rejects the idea that research done inside Rwanda isn’t useful because of the perceived political pressure. “Some of my papers have a critical aspect,” he says. “There is no police trying to tell me what to write or what not to write.”
Survivors’ stories
One concern among scholars is that there has been less focus on elevating the voices of survivors, given that judicial inquiries focused so much on perpetrators.
Jean Pierre Sagahutu is one of those survivors. “I can’t tell you everything that happened in 1994 because it’s too hard,” he says. “I remember everything as if it were yesterday. It’s as if I’m seeing it now.” Sagahutu survived by hiding in a septic tank for more than two months. In that time, his father and mother were killed. Originally trained as an accountant, Sagahutu began driving taxis after the genocide and worked as a ‘fixer’ for people visiting the country for projects, often interviewing génocidaires, the perpetrators of the violence against the Tutsi. “Sometimes my ears hurt, but it made me understand what the people had really done. And in the end, it became therapy.”
In 2019, he met Duclert, whom French President Emmanuel Macron had commissioned to conduct a study on France’s role in the genocide, owing in part to the French government’s support of Rwanda’s pre-genocide Hutu government. In 2021, Duclert presented his 1,000-page report12, which concluded that French authorities saw evidence of a coming genocide as early as 1990 but didn’t take enough measures to stop it.
Sagahutu takes positives from Duclert’s report, but says that scholars have more work to do: “I’d like researchers to try to learn, to really dig and find out what the real causes of the genocide were,” he says. “Because the genocide was not a game of chance, it was something that had been well prepared for a long time.”
One of the most important tools for researchers is recording the testimony of survivors, says Yolande Mukagasana, who wrote the first comprehensive survivor’s account of the genocide, which was published in French in 199713. Mukagasana, now 69, has remained a writer and activist, and is determined to keep the memory of the genocide against the Tutsi alive. As part of her work, she has talked to survivors of other genocides and mass killings and she sees similarities in these events, regardless of where in the world they happened. “The ideology of hate is the same,” she says, adding that survivors experience “exactly the same suffering”.
Yolande Mukagasana wrote the first comprehensive account of the genocide by a survivor.Credit: Chris Schwagga
In 1994, Mukagasana was a nurse and a successful Tutsi woman who ran her own health clinic. When the killings started, Mukagasana and her husband separated, hoping that their three children would be safer with him. During the months of the genocide, in which she was protected by Hutu people, she began writing her testimony on scraps such as cigarette packets.
Mukagasana’s husband and children were killed. When she reached safety at the Hôtel des Mille Collines — featured in the 2004 film Hotel Rwanda — one of the first things she wanted was a pen and paper to record what had happened.
At IRIBA, Mugiraneza knows the importance of documenting the events of 1994. But she also strives to collect evidence of life before. “The marriages. The love songs. The buildings, the proverbs, the stories — all those things that are so magnificent but are seen as trivial.”
“People negotiate a space for thinking, for giving meaning to life — which allows us to better understand what extermination and death are.”
Artificial intelligence is not new. But rapid innovation in the last year means consumers and businesses alike are more aware of the technology’s capabilities than ever before, and are most likely using it themselves in some shape or form.
The AI revolution also has a downside: it empowers fraudsters. Rather than just more productivity and creativity in the workplace, this is one of the most significant effects we’re witnessing. The evolution of large language models and the use of generative AI are giving fraudsters new tactics to explore for their attacks, which are now of a quality, depth and scale that has the potential for increasingly disastrous consequences.
This increased risk is felt by both consumers and businesses alike. Experian’s 2023 Identity and Fraud Report found that just over half (52%) of UK consumers feel like they’re more of a target for online fraud now than they were a year ago, while over 50% of businesses report a high level of concern about fraud risk. It is vital that both businesses and consumers educate themselves on the types of attack that are happening, and what they can do to combat them.
Eduardo Castro
Managing Director ID&Fraud UK&I, Experian.
Getting familiar with the new types of fraud attack
There are two key trends emerging in the AI fraud space: the hyper-personalization of attacks, and the subsequent increase of biometric attacks. Hyper-personalization means that unsuspecting consumers are increasingly being scammed by targeted attacks that trick them into making instant transfers and real-time payments.
For businesses, email compromise attacks can now use generative AI to copy the voice or writing style of a particular company, making more genuine-looking requests that encourage employees to carry out financial transactions or share confidential information.
Generative AI makes it easier for anyone to launch these attacks, by allowing them to create and manage many fake bank, ecommerce, healthcare, government, and social media accounts and apps that look real.
These attacks are only set to increase. Historically, generative AI wasn’t powerful enough to be used at scale to create a believable representation of someone else’s voice or face. Now, it is all but impossible for the human eye or ear to distinguish a deep-fake face or voice from a genuine one.
As businesses increasingly adopt additional layers of identity-verification controls, fraudsters will increasingly turn to these types of attacks to get past them.
Types of attack to look out for
Types of attack to look out for include:
Mimicking a human voice: There has been substantial growth in AI-generated voices that mimic real people. These schemes mean consumers can be tricked into thinking they’re speaking to someone they know, while businesses that use voice verification systems for systems such as customer support can be misled.
Fake video or images: AI models can be trained, using deep-learning techniques, to use very large amounts of digital assets like photos, images and videos to produce high quality, authentic videos or images that are virtually indiscernible from the real ones. Once trained, AI models can blend and superimpose images onto other images and within video content at alarming speed.
Chatbots: Friendly, convincing AI chatbots can be used to build relationships with victims to convince them to send money or share personal information. Following a prescribed script, these chatbots can extend a human-like conversation with a victim over long periods of time to deepen an emotional connection.
Text messages: Generative AI enables fraudsters to replicate personal exchanges with someone a victim knows, using well-written scripts that appear authentic. They can then conduct multi-pronged attacks via text-based conversations with multiple victims at once, manipulating them into carrying out actions that can involve transfers of money, goods, or other fraudulent gains.
Combatting AI by embracing AI
To fight AI, businesses will need to use AI and other tools such as machine learning to ensure they stay one step ahead of criminals.
Key steps to take include:
Identifying fraud with generative AI: Using generative AI for fraudulent transaction screening or identity theft checks is proving more accurate at spotting fraud than previous generations of AI models.
Increasing use of verified biometrics data: Currently, generative AI cannot reliably replicate an individual’s retina, fingerprint, or the way someone uses their computer mouse, which is what makes verified biometric data a strong line of defense.
Consolidating fraud-prevention and identity-protection processes: All data and controls must feed systems and teams that can analyze signals and build models that are continuously trained on good and bad traffic. Knowing what a good actor looks like helps businesses spot impersonation attempts against genuine customers (a minimal sketch of this idea follows this list).
Educating customers and consumers: Proactively educating consumers in personalized ways, across numerous communication channels, can help ensure they are aware of the latest fraud attacks and their role in preventing them. This helps enable a seamless, personalized experience for authentic consumers, while blocking attempts from AI-enabled attackers.
Use customer vulnerability data to spot signs of social engineering: Vulnerable customers are much more likely to fall for deep-fake scams. Processing and using this data for education and victim protection will enable the industry to help those most at risk.
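To make the “continuously trained on good and bad traffic” idea concrete, here is a deliberately simplified sketch: an unsupervised isolation forest learns what genuine transactions look like and flags outliers. The feature names and values are hypothetical, not Experian's:

```python
# Score transactions by how anomalous they look relative to genuine traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: amount, hour of day, seconds spent on the payment page.
genuine = np.column_stack([
    rng.lognormal(3.5, 0.6, 2_000),    # typical purchase amounts
    rng.normal(14, 4, 2_000) % 24,     # daytime-heavy activity
    rng.normal(45, 12, 2_000),         # human-speed checkout
])

model = IsolationForest(contamination=0.01, random_state=0).fit(genuine)

suspect = np.array([[9_500.0, 3.0, 2.0]])  # huge amount, 3am, 2-second checkout
print(model.predict(suspect))              # [-1] -> flagged as anomalous
```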
Why now?
The best companies take a multi-layered approach to fraud prevention – there is no single silver bullet – minimizing as far as possible the gaps that fraudsters look to exploit. For example, by using fraud data-sharing consortiums and data exchanges, fraud teams can share knowledge of new and emerging attacks.
A well-layered strategy, incorporating device, behavioral, consortia, document and ID verification, drastically reduces the weaknesses in the system.
Combating AI fraud will now be part of that strategy for all businesses which take fraud prevention seriously. The attacks will become more frequent and sophisticated, requiring a long-term protection strategy – that covers every step in the fraud prevention process, from consumer to attacker – to be implemented. This is the only way for companies to protect themselves and their customers from the growing threat of AI-powered attacks.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Nvidia CEO Jensen Huang has clarified comments he made about the supposed “death of coding”.
Huang had been criticized in the past for saying on several occasions that, because AI platforms would soon be doing a lot of the heavy lifting when it comes to coding, young people today should not necessarily consider learning it a vital skill.
Speaking at the company’s Nvidia GTC 2024 event in San Jose, Huang was asked at a press Q&A if he still believed this was the case – and it seems not much has changed.
Death of coding?
“I think that people ought to learn all kinds of skills,” Huang said, comparing learning to code to skills such as juggling, playing piano or learning calculus.
However, he did add that “programming is not going to be essential for you to be a successful person… but if somebody wants to learn to do so (program), please do – because we’re hiring programmers.”
In the past, Huang had said that time otherwise spent learning to code should instead be invested in expertise in industries such as farming, biology, manufacturing and education, and that upskilling could be a key way forward, helping provide the knowledge of how and when to use AI programming.
Huang did also add that generative AI would require a number of new skills in order to close the technology divide.
“You don’t have to be a C++ programmer to be successful,” he said. “You just have to be a prompt engineer. And who can’t be a prompt engineer? When my wife talks to me, she’s prompt engineering me.”
“We all need to learn how to prompt AIs, but that’s no different than learning how to prompt teammates.”
These skills could be vital for younger people entering the workforce at an auspicious time, Huang went on to add.
“It (AI) is a new industry – that’s why we say there’s a new industrial revolution,” he declared. “In the future, almost all of our computing will be generated.”
In today’s era of data-driven decision-making, the marriage of machine learning and open banking data is transforming financial services. In recent years, we’ve seen its successful applications across various domains, including enhancing fraud protection through the analysis of extensive datasets with sophisticated algorithms that identify patterns indicative of fraudulent activity.
The technologies have also played pivotal roles in algorithmic trading, with real-time analysis of market trends, and in supporting regulatory compliance, where they have helped financial institutions meet and navigate complex regulatory requirements. The financial services industry has shown others that it is dynamic, and adopting evolving technologies has certainly been a core part of this evolution.
Now more than ever, machine learning and open banking data are also poised to shake up the lending landscape. The convergence of these technologies presents a real opportunity for lenders to better understand their customers, personalize their products as a result, and foster a more transparent and responsive lending ecosystem.
In this article, I delve into my thoughts on the three key ways machine learning technology is redefining the game for the lending industry, and where the opportunities are to offer mutually beneficial outcomes for both lenders and customers alike.
Nick Allen
Chief Technology Officer at Aro.
Marrying Open Banking data with machine learning
One prevailing trend the lending industry can take advantage of is the increasing customer demand for personalized products. The fusion of machine learning and open banking data is becoming a linchpin for how lenders engage with their customers, increase their satisfaction and build brand loyalty. The pairing of open banking data with machine learning algorithms enables lenders to gain unparalleled, deeper insight into customer profiles. With access to roughly a hundred individual attributes (including data from utility payments, rental history, public records, spending habits and so on), lenders can assess their customers' creditworthiness more accurately and customize financial products, ensuring they respond to customers' specific needs and financial capabilities. For example, this could lead to the introduction of credit options that are presently unavailable, or even result in lower interest rates for customers who connect their data and demonstrate sustainable affordability.
What’s more, the incorporation of open banking data introduces a layer of transparency and accuracy to the credit matching experience. Not only do borrowers benefit from a more holistic evaluation that goes beyond the archaic credit scoring approach, but they also get a fairer representation of their financial standing with real time and accurate data. This not only instils more confidence in the lending process, it also boosts financial inclusivity by offering opportunities for individuals who may have limited credit history, despite exhibiting responsible financial behaviors.
Improved personalized credit matching
The adoption of machine learning and its advantages should also extend beyond lenders’ internal operations. While borrowers now anticipate tailored offerings from lenders that align precisely with their unique financial requirements and capabilities, achieving this high degree of customization demands more than just implementing the latest cutting-edge technology. In fact, it requires a nuanced understanding of borrowers’ behaviors and preferences, emphasizing the importance of a customer-centric approach that goes beyond assessing surface-level data. Advanced machine learning algorithms are now capable of evaluating customer profiles against available financial offers, boosting offer acceptance and completion rates. This approach levels the playing field for lenders, keeping the best interests of customers at the forefront.
Before now, many customers were excluded from accessing credit services through no fault of their own. Thin credit files, biased affordability calculations and one-size-fits-all credit decisioning have left many unable to access credit they can afford, or matched with unsuitable products. Machine learning algorithms, however, bring objectivity and speed to this process. Notably, they can streamline loan application processes by rapidly analyzing open banking data, improving both the overall customer experience and lenders' efficiency. For instance, with machine learning, credit decisions have gone from taking days to a matter of hours.
In addition to individual credit assessments, machine learning algorithms empower lenders to stay ahead of dynamic market conditions. In particular, lenders with this technology can continually analyze market trends, customer preferences and other economic indicators in real time. These algorithms can be crucial in providing lenders with valuable insights for strategic decision-making when it comes to product development and risk management in times of economic downturn.
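As a simplified illustration of scoring affordability from open-banking-style attributes, the sketch below trains a small, interpretable classifier on synthetic data. The three attributes and the repayment label are invented for the example; a production system would use far more signals and rigorous validation:

```python
# Toy affordability model on synthetic open-banking-style features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4_000
income_stability = rng.uniform(0, 1, n)    # share of months with a salary inflow
bills_on_time = rng.uniform(0, 1, n)       # on-time rent/utility payment ratio
discretionary = rng.uniform(0, 1, n)       # income left after essential spend

X = np.column_stack([income_stability, bills_on_time, discretionary])
# Synthetic "repaid" label loosely driven by the three attributes.
logit = 3 * income_stability + 2 * bills_on_time + discretionary - 3
repaid = rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, repaid)
applicant = [[0.9, 0.95, 0.4]]             # steady income, pays bills on time
print(f"P(repay) = {model.predict_proba(applicant)[0, 1]:.2f}")
```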
Empowering consumers to navigate the complexities of personal finance
The advantages of machine learning are not exclusive to lenders. It is also becoming a powerful tool to enhance financial literacy among customers. By analyzing their income and expenditure data, machine learning can provide customers with personalized insights into their financial health to highlight what they can afford, and ultimately enable them to make more informed borrowing decisions.
Financial literacy is the cornerstone of a responsible lending environment. As customers gain insights into what they can afford, they become more aware of their financial capabilities and potential risks. Machine learning, in this context, acts as an educational guide, promoting transparency and responsible borrowing practices. The result is a customer base that is more financially savvy and less susceptible to pitfalls associated with uninformed financial decisions.
Entering a new frontier of optimized credit matching
As these innovative approaches continue to gain traction in financial services, the integration of machine learning and open banking data is expected to bring about a more efficient and customer-centric lending ecosystem. Lenders equipped with a robust machine learning approach are those who will better serve their clients, offering tailored solutions, while customers gain the ability to make more informed financial choices, fostering a responsible and transparent lending ecosystem.
In the coming years, the marriage between machine learning and open banking data will continue to evolve, unlocking new possibilities for the lending sector and the broader financial services industry. It's an exciting time for the lending industry, and with a focus on customer-centricity and the responsible use of data, we'll see the lending landscape undergo change welcomed by lenders and consumers alike.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
In today’s fast-paced digital world, the ability to learn quickly and efficiently is more valuable than ever. ChatGPT, a cutting-edge language model, has emerged as a powerful tool in the arsenal of learners across various domains. If you’re looking to leverage this technology to enhance your learning experience, you’re in the right place. This article will guide you through the key strategies to unlock rapid learning with ChatGPT, ensuring that you can absorb and apply new knowledge with unprecedented speed.
Understanding ChatGPT’s Capabilities
First and foremost, it’s essential to grasp what ChatGPT is and its capabilities. ChatGPT is a state-of-the-art language processing AI developed to understand and generate human-like text based on the input it receives. This makes it an invaluable resource for learners seeking to clarify concepts, practice language skills, or even generate ideas for projects and research.
Strategies for Effective Learning
Interactive Learning: One of the standout features of ChatGPT is its interactive nature. Unlike static learning materials, ChatGPT can engage in a two-way conversation, allowing you to ask questions, seek clarifications, and explore topics in depth. This dynamic interaction fosters a deeper understanding and retention of information.
Customized Content: ChatGPT can tailor its responses based on your input, making it possible to receive information that matches your current level of understanding and interest. This personalized approach ensures that you’re not overwhelmed by complexity or bored by simplicity, striking the perfect balance for effective learning.
Practice and Application: Learning is not just about absorbing information; it’s also about applying it. ChatGPT can assist in this by providing examples, simulations, and quizzes tailored to the topic at hand. This practical application helps reinforce learning and improve retention.
Language Learning: For those looking to learn a new language, ChatGPT can be a conversational partner available 24/7. Its ability to understand and generate text in multiple languages makes it an excellent tool for practicing reading, writing, and even conversational skills in a target language.
Research and Idea Generation: ChatGPT can serve as a brainstorming partner, helping you to explore topics, gather information, and generate ideas for projects, essays, or studies. Its vast knowledge base and ability to process information can provide you with a starting point for research and creativity.
Maximizing the Benefits
To truly unlock rapid learning with ChatGPT, it's important to approach the tool with a clear goal and an open mind. Here are a few tips to maximize its benefits, followed by a short example after the list:
Be specific with your queries to get the most relevant and precise information.
Don’t hesitate to ask follow-up questions or for examples to clarify complex concepts.
Use ChatGPT as a supplement to traditional learning methods, not as a replacement. Combining resources can provide a more rounded and comprehensive understanding.
Practice regularly. The more you interact with ChatGPT, the better you'll become at phrasing requests that match your learning style and needs, making the process more efficient over time.
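For readers who prefer to script these tips rather than type them into the chat interface, here is a minimal sketch using the OpenAI Python client. The model name and prompts are illustrative, and you'll need your own API key set in the environment:

```python
# Ask a specific question, then follow up with context retained.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "user",
            "content": "Explain Big-O notation to a beginner, "
                       "with one concrete example using a for-loop."}]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
answer = reply.choices[0].message.content
print(answer)

# A follow-up question keeps the conversation context, as the tips suggest.
history += [{"role": "assistant", "content": answer},
            {"role": "user", "content": "Now show the same idea for nested loops."}]
followup = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(followup.choices[0].message.content)
```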
Embracing the Future of Learning
ChatGPT represents a significant step forward in the realm of digital learning tools. Its ability to provide personalized, interactive, and accessible learning experiences makes it a game-changer for students, professionals, and lifelong learners alike. By understanding how to effectively utilize ChatGPT, you can unlock rapid learning, making the process more enjoyable and effective than ever before. Whether you’re diving into a new subject, mastering a skill, or exploring creative ideas, ChatGPT can support your journey every step of the way.
Remember, the key to leveraging ChatGPT lies in understanding its capabilities and integrating it into your learning strategy. With the right approach, this powerful tool can significantly enhance your ability to learn and apply new information quickly and efficiently. So, embrace this technology and watch as your learning process transforms, opening doors to new possibilities and opportunities.