AlgoBharat, the India arm of the Singapore-based Algorand Foundation, is preparing to launch the second edition of its Web3-focused initiative in India. Dubbed “Road to Impact”, the programme aims to bring together Web3 developers and startup teams to compete and receive industry-grade mentorship and workshops. According to AlgoBharat, this engagement with India’s Web3 talent is intended to strengthen the country’s digital infrastructure. The first edition of the initiative launched in 2023.
Nikhil Varma, chief technology officer of the Algorand Foundation in India, said a study has revealed that cities such as Surat and Trivandrum have become major hubs for developers building blockchain solutions tailored to local industries. According to Varma, Indian developers are exploring a variety of use cases, with a strong focus on supply-chain management, sustainability, healthcare, and SME financing.
“The Road to Impact initiative is built on a philosophy of deep, sustained engagement to help developers build and monetize their skills,” the company said in its statement.
Participants in the programme will compete for cash rewards and ALGO credits to support mainnet deployment. The first prize is $10,000 (roughly Rs. 8.3 lakh) along with 2,000 ALGO in mainnet credits, according to details published on the initiative’s website.
For the second edition of Road to Impact, AlgoBharat has decided to add a “Developer Track” to upskill developers and help them monetize their expertise, meeting the demand for Web3 builders worldwide. The top 10 winners of the Developer Track will earn cash rewards, with the maximum prize being $1,000 (roughly Rs. 83,000) along with 100 ALGO in mainnet credits to support mainnet deployment.
“Studies show that India’s share of the global pool of blockchain developers has grown from three percent in 2018 to 12 percent last year. With the goal of building an ecosystem of scalable, sustainable, real-world blockchain solutions, AlgoBharat’s Road to Impact aligns with India’s 2047 vision of fostering a digitally empowered economy and society, driving economic growth, addressing social challenges, and strengthening the country’s global leadership in technology,” the AlgoBharat team added.
AlgoBharat has been reaching out to eligible individuals and projects in Indore, Surat, Delhi, Trivandrum, Pune, Bangalore, Hyderabad, and Kolkata since August.
The programme is slated to kick off at the Algorand India Conference in Hyderabad on December 7-8 this year, which is expected to bring together developers, entrepreneurs, CEOs, investors, policy officials, and other thought leaders from across India under one roof.
Algorand, through its AlgoBharat initiative, has been actively engaged with India’s Web3 ecosystem for some time. In April 2023, AlgoBharat’s head, Anil Kakani, told Gadgets 360 that the platform wants to help raise India’s profile in nurturing Web3 talent. AlgoBharat later also announced that it had joined forces with the government of Telangana to introduce farmers to eco-friendly agricultural practices through blockchain technology.
Several astronomers were hired as consultants on “Deep Impact” to ensure that the film’s comet science was as accurate as possible. The team included Gene Shoemaker, co-discoverer of Comet Shoemaker-Levy 9; astronaut David Walker; Chris Luchini; and Joshua Colwell, a physics professor at the University of Central Florida. Colwell noted:
“It’s not hard to be more scientifically accurate than most science-fiction movies. […] The director, producers, and writers made the decision to make the film as realistic as possible while remaining true to the story they were telling. […] The film depicts an attempt to deflect the comet, and also the construction of an underground ‘ark’ to house large numbers of people and survive the catastrophic, long-term effects of the collision. […] Both activities are plausible, but both would require enormous resources and a great deal of time to carry out.”
Colwell and the other advisers also made sure that the comet’s surface looked right and that it was a scientifically plausible size; in the film’s case, it is seven miles wide. They also wanted the impact to look like that of a real comet, hypothesized what would happen to Earth’s oceans (a massive tidal wave swallows the city), and were keen to show that any astronaut visiting a comet would remain essentially weightless when near its surface.
For the record, any celestial body with sufficient gravity will necessarily become spherical, thanks to a process known as hydrostatic equilibrium. The smallest self-gravitating spherical object in the solar system is Mimas, Saturn’s seventh-largest moon, with a diameter of roughly 400 kilometres. The largest known comet, meanwhile, is C/2014 UN271, which is only around 80 miles (roughly 130 kilometres) across.
Climate litigation is in the spotlight again after a landmark decision last week. The top European human-rights court deemed that the Swiss government was violating its citizens’ human rights through its lack of climate action. The case, brought by more than 2,000 older women, is one of more than 2,300 climate lawsuits that have been filed against companies and governments around the world (see ‘Climate cases soar’).
But does legal action relating to climate change make a difference to nations’ and corporations’ actions? Litigation is spurring on governments and companies to ramp up climate measures, say researchers.
“There are a number of notable climate wins in court that have led to action by governments,” says Lucy Maxwell, a human-rights lawyer and co-director of the Climate Litigation Network, a non-profit organization in London.
Nature explores whether lawsuits are making a difference in the fight against global warming.
What have climate court cases achieved?
One pivotal case that spurred on change was brought against the Dutch government in 2013, by the Urgenda Foundation, an environmental group based in Zaandam, the Netherlands, along with some 900 Dutch citizens. The court ordered the government to reduce the country’s greenhouse-gas emissions by at least 25% by 2020, compared with 1990 levels, a target that the government met. As a result, in 2021, the government announced an investment of €6.8 billion (US$7.2 billion) toward climate measures. It also passed a law to phase out the use of coal-fired power by 2030 and, as pledged, closed a coal-fired power plant by 2020, says Maxwell.
Source: Grantham Research Institute/Sabin Center for Climate Change Law
In 2020, young environmental activists in Germany, backed by organizations such as Greenpeace, won a case arguing that the German government’s target of reducing greenhouse-gas emissions by 55% by 2030 compared with 1990 levels was insufficient to limit global temperature rise to “well below 2 ºC”, the goal of the 2015 Paris climate agreement. As a result, the government strengthened its emissions-reduction target to a 65% cut by 2030, and set a goal to reduce emissions by 88% by 2040. It also brought forward a target to reach ‘climate neutrality’ — ensuring that greenhouse-gas emissions are equal to or less than the emissions absorbed from the atmosphere by natural processes — by 2045 instead of 2050. “In the Netherlands and Germany, action was taken immediately after court orders,” says Maxwell.
In its 2022 report, the Intergovernmental Panel on Climate Change acknowledged for the first time that climate litigation can cause an “increase in a country’s overall ambition to tackle climate change”.
“That was a big moment for climate litigation, because it did really show how it can impact states’ ambition,” says Maria Antonia Tigre, director of the Sabin Center for Climate Change Law at Columbia University in New York City.
What about cases that fail?
Cases that fail in court can be beneficial, says Joana Setzer at the Grantham Research Institute on Climate Change and the Environment at the London School of Economics and Political Science.
In a 2015 case called Juliana v. United States, a group of young people sued the US government for not doing enough to slow down climate change, which they said violated their constitutional right to life and liberty. “This is a case that has faced many legal hurdles, that didn’t result in the court mandating policy change. But it has raised public awareness of climate issues and helped other cases,” says Setzer.
One lawsuit that benefited from the Juliana case was won last year by young people in Montana, says Setzer. The court ruled that the state was violating the plaintiffs’ right to a “clean and healthful environment”, by permitting fossil-fuel development without considering its effects on the climate. The ruling means that the state must consider climate change when approving or renewing fossil-fuel projects.
What happens when people sue corporations?
In a working paper, Setzer and her colleagues found that climate litigation against corporations can dent the firms’ share prices. The researchers analysed 108 climate lawsuits filed between 2005 and 2021 against public US and European corporations. They found that case filings and court judgments against big fossil-fuel firms, such as Shell and BP, were followed by immediate drops in the companies’ overall valuations and share prices. “We find that, especially after 2019, there is a more significant drop in share prices,” says Setzer. “This sends a strong message to investors, and to the companies themselves, that there is a reputational damage that can result from this litigation,” she says.
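The logic of such analyses can be pictured with a stylized event study: compare a stock’s average daily return in a short window after a case filing against its pre-filing baseline. The data, window length, and function below are invented for illustration; they are not the methodology of Setzer’s working paper.

```python
# Simplified event-study sketch: does a stock's average daily return
# drop in the days following a climate-lawsuit filing?
# All numbers below are synthetic, for illustration only.

def average(xs):
    return sum(xs) / len(xs)

def abnormal_return(daily_returns, event_index, window=3):
    """Mean return over the `window` days after the event, minus the
    mean return over all days before it (a crude baseline)."""
    baseline = average(daily_returns[:event_index])
    post_event = average(daily_returns[event_index:event_index + window])
    return post_event - baseline

# Synthetic daily returns (%): flat before the filing on day 5, a dip after.
returns = [0.1, 0.2, 0.0, 0.1, 0.1, -1.5, -0.8, -0.4, 0.0, 0.1]
effect = abnormal_return(returns, event_index=5)
print(round(effect, 3))  # negative => returns fell after the filing
```

A real event study would also control for market-wide movements (abnormal returns against a market model); this toy version only contrasts raw averages.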
In an analysis of 120 climate cases, to be published on 17 April by the Grantham Research Institute, Setzer’s team found that climate litigation can curb greenwashing in companies’ advertisements — this includes making misleading statements about how climate-friendly certain products are, or disinformation about the effects of climate change. “With litigation being brought, companies are definitely communicating differently and being more cautious,” she says.
What’s coming next in climate litigation?
Maxwell thinks that people will bring more lawsuits that demand compensation from governments and companies for loss and damage caused by climate change. And more cases will be focused on climate adaptation — suing governments for not doing enough to prepare for and adjust to the effects of climate change, she says. In an ongoing case from 2015, Peruvian farmer Saúl Luciano Lliuya argued that RWE, Germany’s largest electricity producer, should contribute to the cost of protecting his hometown from floods caused by a melting glacier. He argued that planet-heating greenhouse gases emitted by RWE increase the risk of flooding.
More cases will be challenging an over-reliance by governments on carbon capture and storage (CCS) technologies — which remove carbon dioxide from the atmosphere and store it underground — in reaching emissions targets, says Maxwell. But CCS technologies have not yet proved to work at a large scale. For instance, in February, researchers criticized the European Union for relying too much on CCS in its plans to cut greenhouse-gas emissions by 90% by 2040 compared with 1990 levels.
“There is a tendency now for companies and governments to say, we’ll use carbon capture, we’ll find some technology,” says Setzer. “In the courts, we’ll start seeing to what extent you can count on the future technologies, to what extent you really have to start acting now.”
What about lower-income countries?
There will also be more climate cases filed in the global south, which generally receive less attention than those in the global north, says Antonia Tigre. “There is more funding now being channelled to the global south for bringing these types of cases,” she says. This month, India’s supreme court ruled that people have a fundamental right to be free from the negative effects of climate change.
Last week’s Swiss success demonstrates that people can hold polluters to account through lawsuits, say researchers. “Litigation allows stakeholders who often don’t get a seat at the table to be involved in pushing for further action,” says Antonia Tigre.
Maxwell thinks that the judgment will influence lawsuits worldwide. “It sends a very clear message to governments,” she says. “To comply with their human rights obligations, countries need to have science-based, rapid, ambitious climate action.”
The AI rollercoaster of expectations and concerns continues to twist at breakneck speeds as enterprises inch ever closer to understanding the rapidly changing technology and its possible functions within their business. Most recently, advanced artificial intelligence platforms such as generative AI and large language models (LLMs) have fallen under scrutiny for their voracious energy consumption and consequent ecological impact, with some researchers hypothesizing that LLMs consume hundreds of liters of freshwater and produce annual emissions equivalent to those of a small country.
With global warming exceeding 1.5 degrees across an entire year for the first time, global stakeholders are questioning where the bulk of responsibility should lie in preventing the climate crisis from worsening. Climate change remains an issue of critical importance for consumers and companies alike amidst these global efforts to reduce CO2 emissions, boding poorly for the public image of any company that uses consumptive AI tools without keeping their carbon footprint in check. More importantly, rampant unchecked AI use could have disastrous consequences for the environment – research from MIT suggests that training just a single large AI model can emit as much carbon as several cars do over their lifetimes, which has the potential to significantly counteract global progress in combating climate change.
Despite the apparent ecological apathy of recent legislation like the EU AI Act and President Biden’s executive order, which focus largely on other facets of AI responsibility, some major AI players have begun to proactively self-regulate and work towards sustainable AI use. Here are ways in which the leaders in artificial intelligence are approaching AI with ecological consciousness, while preserving the profound business value of the technology.
Maxime Vermeir
Senior Director of AI Strategy, ABBYY.
Purpose-built AI
Many drawbacks of generative AI and LLMs stem from the massive stores of data that must be navigated to yield value. Not only does this raise risks in the way of ethics, accuracy, and privacy, but it grossly exacerbates the amount of energy required to use the tools.
Instead of highly general AI tools, enterprises have begun to pivot to narrower purpose-built AI, specialized for specific tasks and goals. For example, ABBYY has adopted this approach by training its machine learning and natural language processing models to specifically read and understand documents that run through enterprise systems just like a human. With pre-trained AI skills to process highly specific document types with 95% accuracy, organizations can save trees by eliminating the use of paper while also reducing the amount of carbon emitted through cumbersome document management processes.
Empowering developers
AI companies don’t need to shoulder the burden of sustainable AI all on their own – some are proactively putting the proverbial ball in the court of developers.
OpenAI, the artificial intelligence pioneer responsible for the widely popular ChatGPT, recently announced that developers can create their own “GPT” platforms for specialized purposes. This allows developers and organizations to narrow their AI use with a high degree of customizability, trimming away excessive features and data that amplify ecological damage. For example, developers could design GPTs for purposes limited to creative writing advice, cooking information, tech support, or any other niche purpose.
Considering the increased risks for inaccuracy and privacy infringement associated with highly general AI models, developers will likely be motivated to take advantage of these narrower, more specialized GPT platforms not just for ecological responsibility, but for improved business outcomes as well.
Sustainable business practices
Companies should also take a step back from the technology itself, and look inside their organization for more ways to sustainably leverage AI. For example, Microsoft revealed that its AI-supporting hardware runs exclusively on clean energy, eliminating so-called “operational emissions.”
Moreover, companies can use AI as a tool to explore other facets of their business in which sustainability could be prioritized. Forrester highlights the measurement, reporting, and data visualization capabilities of artificial intelligence to suggest that it could power a climate revolution of its own.
Although objectively important, emissions aren’t the only measure of ecological impact – studies have shown that a combination of robotics and AI has reduced herbicide use in some contexts by 90%. As companies continue to grapple with the utility and consequences of AI, they must explore the full breadth of its capability to enhance and contribute to sustainability.
Enterprises pick up the slack
So far, early AI legislation has largely failed to rein in the ecological implications of artificial intelligence, focusing instead on privacy and other ethical areas. While these areas are also crucial for responsible AI use, enterprises must keep themselves accountable in how they leverage artificial intelligence to generate business value.
2023 may have been a year of hype, noise, expectations, and misconceptions surrounding artificial intelligence, but the maturity that enterprises have accrued over the past year has given them the means necessary to make informed and responsible decisions regarding AI use. Still, it’s wise to scrutinize, question, and hold large organizations accountable for their carbon footprint and other impacts on the environment – those who prioritize ecological responsibility should have nothing to hide.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Developers are under the gun to generate code faster than ever – with constant demands for greater functionality and seamless user experience – leading to a general deprioritization of cybersecurity and inevitable vulnerabilities making their way into software. These vulnerabilities include privilege escalations, back door credentials, possible injection exposure and unencrypted data.
This pain point has existed for decades, however, artificial intelligence (AI) is poised to lend considerable support here. A growing number of developer teams are using AI remediation tools to make suggestions for quick vulnerability fixes throughout the software development lifecycle (SDLC).
Such tools can assist the defense capabilities of developers, enabling an easier pathway to a “security-first” mindset. But – like any new and potentially impactful innovation – they also raise potential issues that teams and organizations should explore. Here are three of them, with my initial perspectives in response:
Pieter Danhieux
Co-Founder and CEO, Secure Code Warrior.
No. If effectively deployed, the tools will allow developers to gain a greater awareness of the presence of vulnerabilities in their products, and then create the opportunity to eliminate them. Yet, while AI can detect some issues and inconsistencies, human insights are still required to understand how AI recommendations align with the larger context of a project as a whole. Elements like design and business logic flaws, insight into compliance requirements for specific data and systems, and developer-led threat modeling practices are all areas in which AI tooling will struggle to provide value.
In addition, teams cannot blindly trust the output of AI coding and remediation assistants. “Hallucinations,” or incorrect answers, are quite common, and typically delivered with a high degree of confidence. Humans must conduct a thorough vetting of all answers – especially those that are security-related – to ensure recommendations are valid, and to fine-tune code for safe integration. As this technology space matures and sees more widespread use, inevitable AI-borne threats will become a significant risk to plan for and mitigate.
Ultimately, we will always need the “people perspective” to anticipate and protect code from today’s sophisticated attack techniques. AI coding assistants can lend a helping hand on quick fixes and serve as formidable pair programming partners, but humans must take on the “bigger picture” responsibilities of designating and enforcing security best practices. To that end, developers must also receive adequate and frequent training to ensure they are equipped to share the responsibility for security.
Training needs to evolve to encourage developers to pursue multiple pathways for educating themselves on AI remediation and other security-enhancing AI tools, as well as comprehensive, hands-on lessons in secure coding best practices.
It is certainly handy for developers to learn how to use tools that enhance efficiency and productivity, but it is critical that they understand how to deploy them responsibly within their tech stack. The question we always need to ask is, how can we ensure AI remediation tools are leveraged to help developers excel, versus using them to overcompensate for lack of foundational security training?
Developer training should also evolve by implementing standard measurements for developer progress, with benchmarks to compare over time how well they’re identifying and removing vulnerabilities, catching misconfigurations and reducing code-level weaknesses. If used properly, AI remediation tools will help developers become increasingly security-aware while reducing overall risk across the organization. Moreover, mastery of responsible AI remediation will be seen as a valuable business asset and enable developers to advance to new heights with team projects and responsibilities.
The software development landscape is changing all the time, but it is fair to say that the introduction of AI assistive tooling into the standard SDLC represents a rapid shift to essentially a new way of working for many software engineers. However, it perpetuates the same issue of introducing poor coding patterns that can potentially be exploited more quickly, and at greater volume, than at any other time in history.
In an environment operating in a constant state of flux, training must keep pace and remain as fresh and dynamic as possible. In an ideal scenario, developers would receive security training that mimics the issues faced in their workday, in the formats that they find most engaging. Additionally, modern security training should place emphasis on secure design principles, and account for the deep need to employ critical thinking to any AI output. That, for now, remains the domain of a highly skilled security-aware developer who knows their codebase better than anyone else.
It all comes down to innovation. Teams will thrive with solutions that expand the visibility of issues and resolution capabilities during the SDLC, yet do not slow down the software development process.
AI cannot step in to “do security for developers,” just as it’s not entirely replacing them in the coding process itself. No matter how many more AI advancements emerge, these tools will never deliver 100 percent, foolproof answers about vulnerabilities and fixes. They can, however, perform critical roles within the greater picture of a total “security-first” culture – one that depends equally on technology and human perspectives. Once teams undergo required training and on-the-job knowledge-building to reach this state, they will indeed find themselves creating products swiftly, effectively and safely.
It must also be said that, similar to online resources like Stack Overflow or Reddit, if a programming language is less popular or common, this will be reflected in the availability of data and resources. You’re unlikely to struggle to find answers to security questions in Java or C, but data may be lacking or conspicuously absent when trying to solve complex bugs in COBOL or even Golang. LLMs are trained on publicly available data, and they are only as good as the dataset.
This is, again, a key area in which security-aware developers fill a void. Their own hands-on experience with more obscure languages – coupled with formal and continuous security learning outcomes – should help fill a distinct knowledge gap and reduce the risk of implementing AI output on faith alone.
In the digital landscape, where identities are woven into every aspect of our online interactions, the emergence of AI-driven deepfakes has become a disruptive force, challenging the very essence of identity verification. In navigating this ever-evolving terrain, CIOs and IT leaders must dissect the intricate interplay between emerging technologies and their profound impact on the integrity of identity management processes.
Online identity verification today consists of two key steps. First, the user is asked to take a picture of their government-issued identity document, which is inspected for authenticity. Second, the user is asked to take a selfie, which is biometrically compared to the picture on the identity document. Traditionally only used in regulated know-your-customer (KYC) use cases such as online bank account opening, identity verification is now used in a range of contexts, from interactions with government services and preserving the integrity of online marketplace platforms to employee onboarding and improving security during password reset processes.
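As a rough illustration, the two steps can be pictured as a small pipeline. Everything here — the function names, the toy similarity measure, and the 0.8 threshold — is a hypothetical sketch, not any vendor’s actual verification API.

```python
# Hypothetical sketch of the two-step identity-verification flow:
# 1) inspect the captured ID document, 2) biometrically match the selfie.
# The checks are stubbed out; real systems use trained models for both steps.

def inspect_document(document):
    """Step 1: basic authenticity checks on the captured ID document."""
    return bool(document.get("security_features_ok")) and not document.get("tampering_detected")

def match_selfie(selfie_embedding, document_embedding, threshold=0.8):
    """Step 2: compare face embeddings; a toy dot-product similarity,
    assuming both embeddings are unit-normalized."""
    similarity = sum(a * b for a, b in zip(selfie_embedding, document_embedding))
    return similarity >= threshold

def verify_identity(document, selfie_embedding, document_embedding):
    if not inspect_document(document):
        return "rejected: document failed authenticity checks"
    if not match_selfie(selfie_embedding, document_embedding):
        return "rejected: selfie does not match document photo"
    return "verified"

doc = {"security_features_ok": True, "tampering_detected": False}
print(verify_identity(doc, [0.6, 0.8], [0.6, 0.8]))  # "verified"
```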
Subversion of the identity verification process through fraudulent identity presentation, for example by using a deepfake of an individual to defeat the selfie step, thus introduces considerable risk to an organization.
Akif Khan
1. Mechanisms to subvert deepfake attacks
As attackers leverage the relentless progress of GenAI to craft increasingly convincing deepfakes, CIOs and IT leaders must adopt a proactive stance, bolstering their defenses with a multifaceted approach. Key to this is ensuring that your identity verification vendor deploys robust liveness detection.
This capability is deployed during the second step when the selfie is being taken, to check whether the selfie is in fact being taken of a live person who is genuinely present during the interaction. This liveness detection can be active, in which a user responds to a prompt such as turning their head, or it may be passive, in which subtle features such as micro movements or depth perspective are assessed without the user having to move.
The integration of active and passive liveness detection techniques, coupled with additional signals indicative of an attack, offers a holistic defense framework against evolving deepfake attacks. Such additional signals that can indicate an attack can be revealed using device profiling, behavioral analytics and location intelligence. Identity verification vendors may develop some of these capabilities natively, or use partners to deliver them, but they should be packaged up as a single solution for you to deploy.
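One simple way to picture how liveness results and these auxiliary signals might be “packaged up” into a single decision is a weighted risk score. The signal names, weights, and threshold below are invented for illustration; production systems use far richer models.

```python
# Illustrative aggregation of liveness checks with auxiliary fraud signals.
# Weights and the decision threshold are arbitrary, for demonstration only.

RISK_WEIGHTS = {
    "liveness_failed": 0.5,     # active or passive liveness check failed
    "device_emulator": 0.2,     # device profiling flags an emulator
    "behaviour_scripted": 0.2,  # behavioural analytics sees robotic input
    "location_mismatch": 0.1,   # location inconsistent with user history
}

def risk_score(signals):
    """Sum the weights of every signal that fired (0.0 = clean, 1.0 = all fired)."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def decide(signals, threshold=0.5):
    return "step-up review" if risk_score(signals) >= threshold else "pass"

print(decide({"device_emulator": True}))                           # pass
print(decide({"liveness_failed": True, "device_emulator": True}))  # step-up review
```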
2. Leveraging GenAI to improve identity verification
The versatility of GenAI presents intriguing opportunities for defense against deepfake attacks. By leveraging GenAI’s ability to develop synthetic datasets, product leaders can reverse-engineer attack variants and fine-tune their algorithms for improved detection rates. Beyond cybersecurity applications, GenAI can also address issues of demographic bias in face biometrics processes.
Traditional methods of obtaining diverse training datasets pose challenges in terms of cost and effort, often resulting in biased machine-learning algorithms. However, the creation of deepfake images using GenAI offers a solution by generating large datasets of synthetic faces with artificially elevated levels of training data for underrepresented demographic groups. This not only reduces the barriers to obtaining diverse data sets but also helps minimize bias in biometric processes. Challenge your identity verification vendors as to whether they are innovating and using GenAI for positive purposes, not just treating it as a threat.
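In miniature, the balancing idea looks like this: count training samples per demographic group, then top up smaller groups with synthetic entries. The `generate_synthetic_face` stub stands in for a real GenAI generator, which this sketch does not implement.

```python
# Toy sketch of balancing a face-training dataset with synthetic samples.
# `generate_synthetic_face` is a placeholder for an actual GenAI model.
from collections import Counter

def generate_synthetic_face(group):
    return {"group": group, "synthetic": True}

def balance_dataset(samples):
    """Top up every demographic group to the size of the largest one."""
    counts = Counter(s["group"] for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, n in counts.items():
        balanced.extend(generate_synthetic_face(group) for _ in range(target - n))
    return balanced

data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
balanced = balance_dataset(data)
print(Counter(s["group"] for s in balanced))  # both groups now have 8 samples
```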
Select vendors who have embraced this new world and taken proactive measures such as introducing bounty programs to challenge hackers to defeat liveness detection processes. By incentivizing individuals to identify and report potential vulnerabilities, vendors and hence organizations can bolster their defensive capabilities against deepfake attacks.
As we chart a course towards a secure digital future, collaboration emerges as the cornerstone of our collective defense against deepfake adversaries. By fostering dynamic partnerships and cultivating a culture of vigilance, CIOs and IT leaders can forge a resilient ecosystem that withstands the relentless onslaught of AI-driven deception. Armed with insight, innovation, and a steadfast commitment to authenticity, look to embark on a journey towards a future where identities remain inviolable in the face of technological upheaval.
Siri Eldevik Håberg studies whether environmental factors such as smoking are linked to subtle changes to the human genome.Credit: Fredrik Naumann/Panos Pictures for Nature
As a medical student, Siri Eldevik Håberg became fascinated with how the health of a baby can be affected during pregnancy. Smoking, for example, is a proven risk factor for respiratory infection in fetuses — a finding supported by one of Håberg’s earliest studies, which scoured data from tens of thousands of births in Norway to investigate outcomes for a small subset of women who had smoked during, but not after, pregnancy1. The analysis was based on data from the Mother, Father, and Child Cohort Study (MoBa) at the Norwegian Institute of Public Health (NIPH) in Oslo, which today holds biological samples and survey information for nearly 300,000 participants.
Nature Index 2024 Health sciences
Håberg conducted her postdoctoral work in the United States, where she joined a group at the National Institute of Environmental Health Sciences in Durham, North Carolina. She contributed data analysis to a team that examined 1,062 blood samples from MoBa, drawn from the umbilical cord at the delivery of a baby, and identified 10 genes that were altered in infants born to women who smoked while pregnant. The 2012 study provided important evidence for how non-heritable smoking exposure can cause certain epigenetic effects — subtle changes to the genome that impact the reading of DNA but do not alter the DNA sequence2. “We are only beginning to understand the gravity of epigenetic changes during development,” says Håberg.
Now, as director of the Centre for Fertility and Health at the NIPH, Håberg is investigating ways to combine MoBa data with statistics from Norwegian registries on factors such as vaccinations, prescriptions, education and economic status. In one project, she and her colleagues matched babies from the 2012 study with data collected by the Medical Birth Registry of Norway and found that reduced birth weight was strongly correlated with smoking during pregnancy3.
Having investigated the effects of smoking on fetal health, Håberg was interested in other factors that could cause epigenetic changes linked to development. In a 2022 study published in Nature Communications4, she and her co-authors compared rates of DNA methylation — a process that affects levels of gene expression — for almost 2,000 MoBa newborns. Roughly half of the babies were conceived naturally and half through reproductive technologies such as in vitro fertilization. Even after controlling for the parents’ DNA methylation rates, differences were found in more than 100 genes, including those related to growth and development. The findings might pave the way for big-data approaches to studies related to reproductive technologies.
Håberg is passionate about connecting specialists from her team with interdisciplinary groups from around the world so that they can explore large amounts of data that hold clues about fetal health. One such project is comparing MoBa data with information from the Danish National Birth Cohort. “It all comes down to finding exciting new ways for teams of specialists to work together,” she says. “It’s great to see so many resources dedicated to questions of early embryonic development.” — Amy Coombs
NARMIN GHAFFARI LALEH: Deeper vision
Narmin Ghaffari Laleh. Credit: Courtesy of Narmin Ghaffari Laleh
As a university student studying medical photonics in Jena, Germany, Narmin Ghaffari Laleh was inspired to use her programming skills to help patients and doctors. She sought work experience at the local medical-device company Carl Zeiss Meditec to explore the use of artificial intelligence (AI) in improving medical-image analysis. Her work there concentrated on eye imaging, where conventional methods of analysis use systems that read each row of pixels, identifying features such as the cornea, lens and retina by tracking their colours and the distance between them. Common variables such as glasses can throw such systems off, however. “These kinds of programs work well until someone puts on glasses or contact lenses and takes a photo,” says Ghaffari Laleh, who was a master’s student at Friedrich Schiller University Jena at the time.
The model developed by Ghaffari Laleh and her colleagues at the company used deep learning — a machine-learning technique that can identify complex patterns. In testing, their system analysed images with variables such as glasses with greater accuracy and less human oversight than previous methods. “I saw the potential for this sort of program to impact other areas of medicine, because the machine-learning techniques were rapidly becoming more sophisticated and could handle more data, all without the traditional human reviewer,” says Ghaffari Laleh, who built on these findings in her 2020 master’s thesis.
Ghaffari Laleh began her PhD at RWTH Aachen University in Aachen, Germany, in the field of computational pathology — an emerging area of research that aims to improve patient care by using advances in AI and big data. Her focus was on developing systems that can more accurately and efficiently identify visual indicators of cancer and other diseases than methods that rely solely on human specialists. These systems could be particularly useful in the analysis of tissue samples that have been prepared for microscope slides and stained with the widely used haematoxylin and eosin (H&E) dye, which turns cell structures different shades of purple, blue and pink, she says.
In 2022, Ghaffari Laleh co-authored a paper5 describing how AI could consistently categorize tumours in kidney-tissue slides. “With deep learning, we can detect patterns that the human eye cannot see,” she says.
For a separate study6, the team showed how AI trained to identify mutations in a protein associated with bladder cancer could outperform a uropathologist in analysing tissue samples stained with H&E. “We do not aim to replace the urologist, but deep learning can offer additional analysis,” says Ghaffari Laleh.
To test whether these methods can move to clinical applications, Ghaffari Laleh dedicated her PhD thesis to investigating how applicable these kinds of AI systems could be to a variety of diseases and patient demographics. Her dissertation is pending defence in March.
Ghaffari Laleh hopes to apply her skills to help medical professionals in developing countries who cannot afford to run advanced diagnostics and who struggle to recruit and train skilled professionals. “AI is a much more affordable option,” she says. “If a deep-learning model can analyse data from diverse patient groups from a wide range of countries, then hospitals that lack resources can ship samples for diagnosis.” She’s also working on AI that can read text7, ultrasound and radiology image data, with hopes that they can speed up the work of doctors and other specialists worldwide. — Amy Coombs
TAL PATALON: Prolific polymath
Tal Patalon. Credit: Asaf Brenner
Tal Patalon prides herself on being able to pivot her work to where she thinks her expertise, and that of her team, will be most effective. “For me, it’s all about clinical impact,” she says. As head of Kahn-Sagol-Maccabi (KSM) in Tel Aviv — the research and innovation centre of Maccabi Healthcare Services, one of Israel’s largest health-care providers — Patalon is interested in a range of medical conditions, including parvovirus, mpox, cancer and coeliac disease.
Having the capacity to launch research projects quickly proved invaluable to Patalon and her team during the COVID-19 pandemic, when global treatment and vaccination protocols changed rapidly to keep up with the evolution of the disease. In 2021, as the highly contagious Delta wave was surging through Israel, Patalon co-led a team that scoured the health records of almost 125,000 Israelis, charting coronavirus incidence, symptoms and hospitalization rates over three months.
The team discovered that vaccinated people who had not previously tested positive for COVID-19 were 13 times more likely to be infected by the new variant than previously infected individuals who were unvaccinated. The results showed that infection with SARS-CoV-2, the virus that causes COVID-19, confers natural immunity, providing valuable evidence that vaccinating previously infected people wasn’t an immediate priority8. “It was a very big achievement for us,” says Patalon.
Extracting new insights from the vast amounts of public-health data that are being collected globally is key to advancing treatments and keeping one step ahead of infectious diseases, says Patalon. As part of her role at KSM, she oversees the Tipa Biobank, Israel’s largest biosample repository, comprising more than one million blood samples from some 200,000 Maccabi patients. In addition to one-off samples from patients, the biobank collects serial samples — successive samples from the same patient over a period of time. Serial samples are “very rare and highly valuable for research”, says Patalon, especially when it comes to analysing biological changes before and after a diagnosis.
KSM also manages some 30 years’ worth of electronic medical records from more than 2.7 million patients collected by 32 hospital networks that are affiliated with Maccabi. By sharing these data, which have been deidentified, with researchers around the world, Patalon hopes to inform artificial-intelligence-powered innovations in diagnosis and treatment. “These collaborations, I believe, will create the future of medicine,” she says.
Being adaptable as a researcher and a leader is crucial, particularly in times of crisis, says Patalon, whose team has been deeply affected by the war in Gaza.
“This is a time that requires a lot of patience, empathy, emotional support and the building of good relationships. We have to come out of this situation stronger.” — Sandy Ong
SARAH LUO: Hunting hunger pathways
Sarah Luo’s team discovered one of the brain’s many feeding regulatory centres. Credit: Agency for Science, Technology and Research (A*STAR)
Sarah Luo’s fascination with neuroscience was sparked when, as an undergraduate student at the University of Wisconsin-Madison in Wisconsin, she was introduced to the work of British neurologist and author, Oliver Sacks.
Known for his empathic approach to patients with conditions such as amnesia, face blindness and Tourette’s syndrome, Sacks “brought a very humanizing perspective to brain disorders”, says Luo. “He showed how even minute changes in certain regions of the brain could lead to profound effects on cognition and behaviour.”
Today, Luo runs a lab at Singapore’s Agency for Science, Technology and Research (A*STAR), where she studies the connection between hunger and the brain to help patients with metabolic disorders such as diabetes and fatty liver disease. She first studied this connection as a postdoctoral fellow in an adjacent lab, where she was part of a team that discovered a mechanism that regulates feeding.
For many years, researchers had assumed that hunger is regulated by two types of neurons: one that drives hunger and another that suppresses it. But when Luo and her colleagues ran experiments that stimulated certain neurons in a region of the brain called the tuberal nucleus, they could prompt mice to start eating even when they weren’t hungry9. “There are actually many feeding regulatory centres in the brain, and we discovered one of them,” she says.
These other centres can deal with “more diverse aspects of eating behaviour”, says Luo, including environmental cues that can incite hunger. In a series of follow-up experiments10, Luo and her colleagues observed that when mice were placed in the same feeding chamber where the neurons in the tuberal nucleus had been activated the previous week, they would immediately start eating, even if it was outside their normal feeding times. The results suggest that these neurons not only influence basic feeding behaviour, but also integrate memory and contextual cues into the eating process, says Luo.
Humans experience similar cues. Visiting a favourite restaurant, for example, or returning to the family home can spark an appetite.
“Your neurons might become activated, just because of the environment you’re in,” says Luo. “Those signals might cause you to eat, even if you’re not actually hungry.”
Luo and her team at A*STAR hope to develop treatments that will help to curb excessive food consumption in people with obesity and metabolic conditions by blocking or activating certain neural signals. The trick, she says, is to find and target pathways that run between the brain and organs such as the liver and kidneys, which are more accessible than neural pathways in the brain.
“It would be very invasive to implant an electrode in the brain to activate or inhibit these pathways,” says Luo. But activating pathways that connect to these regions in the brain — by using vagal nerve stimulation, for example, which is a technique used to treat epilepsy that involves implanting a pulse generator under the skin on the chest — would be a more viable option. “Then maybe there will be an easier route for developing therapies to target some of these metabolic diseases,” says Luo. — Sandy Ong
The digital landscape, ever expanding and evolving, has given rise to an increasing number of security vulnerabilities. To address this issue, a new open-source project called the Vulnerability Impact Scoring System (VISS) has been introduced. VISS is designed to enhance security measures by providing a unique assessment tool that measures the impact of vulnerabilities from a defender’s perspective. This innovative approach focuses on the actual impact of potential threats, rather than on their theoretical existence.
Since March 2023, Zoom, a leading video conferencing platform, has been utilizing VISS to assess reward disbursements within its Bug Bounty Program. This program encourages security researchers and product users to uncover and disclose security vulnerabilities, providing them with legal protection. The incorporation of VISS into this program has been instrumental in helping Zoom prioritize vulnerabilities that are most likely to impact them, thus allowing for more efficient use of resources.
The Vulnerability Impact Scoring System analyzes vulnerabilities based on 13 impact aspects. These aspects are categorized into three groups: platform, infrastructure, and data. The resulting score, ranging from 0 to 100, reflects the severity of the impact within a specific environment. This scoring system provides an objective measure of the potential damage a vulnerability could inflict, enabling organizations to prioritize their response efforts accordingly.
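The full specification and calculator are published in the VISS repository; as a purely illustrative sketch of the idea of group-based impact scoring, the snippet below rates a handful of hypothetical aspects per group and averages up to a 0–100 score. The aspect names, the 0.0–1.0 rating scale, and the flat averaging are all assumptions for demonstration, not the real VISS formula:

```python
# Illustrative sketch of a VISS-style impact score (NOT the official formula).
# VISS defines 13 impact aspects across platform, infrastructure, and data
# groups; the aspects below are a simplified, hypothetical subset.

ASPECT_GROUPS = {
    "platform": ["compromise_scope", "tenancy_impact"],
    "infrastructure": ["service_disruption", "lateral_movement"],
    "data": ["confidentiality", "integrity", "availability"],
}

def viss_like_score(ratings: dict[str, float]) -> float:
    """Average each group's 0.0-1.0 aspect ratings, then average the
    group means and scale to a 0-100 impact score."""
    group_means = []
    for group, aspects in ASPECT_GROUPS.items():
        values = [ratings.get(a, 0.0) for a in aspects]  # unrated -> no impact
        group_means.append(sum(values) / len(values))
    return round(100 * sum(group_means) / len(group_means), 1)

score = viss_like_score({
    "compromise_scope": 0.8,
    "service_disruption": 0.5,
    "confidentiality": 1.0,
    "integrity": 0.6,
})
print(score)  # 39.4
```

Grouping before averaging means a vulnerability that devastates one group (say, data) still registers strongly even if the other groups are untouched, which mirrors the defender-centric framing the article describes.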
VISS was put to the test during the HackerOne H1-4420 live-hacking event in London in 2023. The event demonstrated the effectiveness of VISS in improving resource allocation and focusing on addressing Critical and High severity vulnerabilities. The implementation of VISS led to a shift in vulnerability report submissions towards these higher severity categories, with a significant reduction observed in medium severity submissions.
This shift towards targeting higher severity vulnerabilities is a testament to the efficacy of VISS. By providing a clear, objective measure of the potential impact of a vulnerability, VISS enables organizations to focus their resources where they are most needed. This, in turn, leads to a more robust and secure digital environment.
VISS is not just a tool for individual organizations, but a global mission to enhance security measures. By providing a comprehensive and objective measure of vulnerability impact, VISS aims to enhance the capabilities of incident response and security teams across the globe. The open-source nature of the project invites contributions to its development, fostering a collaborative approach to improving digital security.
The development and implementation of the Vulnerability Impact Scoring System is a significant stride forward in the realm of digital security. By focusing on the actual impact of vulnerabilities, VISS offers a more realistic and effective approach to managing digital threats. The system’s successful use in Zoom’s Bug Bounty Program and the HackerOne H1-4420 live-hacking event highlights its potential to transform the way organizations respond to security vulnerabilities.
The VISS project is open for exploration and contribution under the GPL 3.0 license at https://github.com/zoom/viss. This open-source project is a testament to the collaborative spirit of the digital community, inviting all to contribute to the ongoing development and enhancement of this innovative security tool. With the continued development and implementation of VISS, the future of digital security looks promising.
In today’s interconnected world, businesses are expanding globally, and so is the importance of globalization testing. This process involves adapting products, services, and applications to cater to diverse cultural and linguistic preferences. One crucial aspect of globalization testing is cultural sensitivity, which plays a pivotal role in shaping the user experience in various regions.
This blog explores the significance of cultural sensitivity in globalization testing and how it impacts user experience across the globe.
The Globalization Testing Landscape
Globalization testing, often referred to as G11n testing, is an integral part of the software development and product adaptation process. It ensures that software, websites, or applications can function effectively in different regions, serving the full diversity of their users. It encompasses various aspects, such as localization, internationalization, and cultural sensitivity.
Cultural Sensitivity in Globalization Testing
Cultural sensitivity in globalization testing is all about respecting and acknowledging cultural differences when adapting a product for a global audience. It includes considering various elements like language, symbols, colors, graphics, date formats, and even social norms. By incorporating cultural sensitivity, businesses can foster better user engagement and satisfaction in different regions. Here’s how it impacts the user experience:
1. Language Localization
Language is one of the most apparent elements of cultural sensitivity. Ensuring that the content is accurately translated, using proper grammar, idiomatic expressions, and cultural references, is vital. Neglecting this aspect can lead to misunderstandings, confusion, and, ultimately, a negative user experience.
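In practice this is usually handled with a toolchain such as gettext and per-locale message catalogs; as a minimal sketch of the fallback logic such systems rely on, the snippet below resolves a message key against invented catalogs (the `CATALOGS` contents and the `translate` helper are hypothetical, not any real library's API):

```python
# Minimal sketch of locale-aware message lookup with fallback.
# Catalogs here are invented; production systems use gettext/.po files
# or a translation-management platform.

CATALOGS = {
    "en": {"greeting": "Welcome back!"},
    "es": {"greeting": "¡Bienvenido de nuevo!"},
    "de": {"greeting": "Willkommen zurück!"},
}

def translate(key: str, locale: str, default_locale: str = "en") -> str:
    """Look up a message for `locale`, falling back gracefully."""
    # "es-MX" falls back to "es", then to the default locale.
    for candidate in (locale, locale.split("-")[0], default_locale):
        catalog = CATALOGS.get(candidate)
        if catalog and key in catalog:
            return catalog[key]
    return key  # last resort: show the key rather than crash

print(translate("greeting", "es-MX"))  # ¡Bienvenido de nuevo!
print(translate("greeting", "fr"))     # Welcome back!  (fallback to English)
```

A globalization test suite would exercise exactly these fallback paths, since a missing or half-translated catalog is one of the most common causes of the misunderstandings described above.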
2. Icons, Symbols, and Imagery
Different regions may interpret icons, symbols, and imagery differently. For instance, colors like red may symbolize love in one culture, but danger in another. Understanding these nuances and adapting visual elements accordingly is essential to avoid potential misinterpretations that can deter users.
3. Date and Time Formats
Date and time formats can vary significantly across cultures. Some regions use the day-month-year format, while others prefer month-day-year. Mishandling date and time can lead to misunderstandings and logistical issues that disrupt the user experience.
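The ambiguity is easy to demonstrate: the same calendar date renders differently under each regional convention, which is why test plans commonly require an unambiguous format (ISO 8601) for storage and per-locale rendering only at the UI layer. A short illustration:

```python
from datetime import date

d = date(2025, 4, 3)  # 3 April 2025

# The same date rendered in common regional conventions:
formats = {
    "day-month-year (much of Europe, India)": d.strftime("%d/%m/%Y"),
    "month-day-year (United States)": d.strftime("%m/%d/%Y"),
    "year-month-day (ISO 8601)": d.strftime("%Y-%m-%d"),
}
for label, rendered in formats.items():
    print(f"{label}: {rendered}")

# "03/04/2025" is 3 April to a European reader but 4 March to a US reader;
# only the ISO form is unambiguous across locales.
```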
4. Cultural Appropriateness
Cultural sensitivity extends to avoiding cultural appropriation or stereotypes that can alienate users. By respecting cultural traditions and values, businesses can create a welcoming and inclusive user experience.
5. Navigation and User Interface
The navigation and user interface design should be intuitive and consider the cultural norms of the target audience. For example, some cultures read from right to left, which may require mirroring the layout of certain elements for a more natural user experience.
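Whether content should be laid out right-to-left can be inferred from Unicode bidirectional character classes. The heuristic below is a simplified sketch (real layout engines implement the full Unicode Bidirectional Algorithm): it classifies a string by its first strongly directional character, using only the standard library:

```python
import unicodedata

def is_rtl(text: str) -> bool:
    """Heuristic: treat text as right-to-left if its first strongly
    directional character has an RTL bidirectional class."""
    for ch in text:
        bidi = unicodedata.bidirectional(ch)
        if bidi == "L":            # strong left-to-right (Latin, CJK, ...)
            return False
        if bidi in ("R", "AL"):    # strong right-to-left (Hebrew, Arabic)
            return True
    return False  # no strongly directional characters: default to LTR

print(is_rtl("Hello"))    # False
print(is_rtl("مرحبا"))    # True (Arabic)
print(is_rtl("שלום"))     # True (Hebrew)
```

A globalization test for an RTL locale would assert that such detection triggers the mirrored layout, not just translated strings.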
Impact on User Experience in Different Regions
1. Improved Engagement
Cultural sensitivity enhances user engagement. When users feel that a product or service respects their culture and language, they are more likely to engage with it, leading to increased customer loyalty and user satisfaction.
2. Enhanced User Trust
A culturally sensitive approach builds trust. Users trust products and services that align with their cultural values, norms, and expectations. Trust is a crucial factor in a positive user experience.
3. Expansion Opportunities
When a product or service demonstrates cultural sensitivity, it can more effectively expand into new markets and regions. A lack of cultural sensitivity can hinder market penetration and growth.
4. Reduced Support and Maintenance Costs
Cultural insensitivity can lead to misunderstandings, confusion, and complaints, resulting in higher support and maintenance costs. By incorporating cultural sensitivity in globalization testing, these costs can be minimized.
Case Studies in Cultural Sensitivity
To illustrate the significance of cultural sensitivity in globalization testing, let’s consider a few case studies:
1. McDonald’s
McDonald’s adapts its menus in various countries to cater to local tastes. For instance, in India, the menu includes a range of vegetarian options to accommodate the predominantly vegetarian diet of many Indians. This cultural sensitivity has been key to McDonald’s success in the Indian market.
2. Airbnb
Airbnb’s approach to user experience is highly localized, offering features like different language support and culturally sensitive search filters. This approach ensures that users can find accommodations that align with their cultural preferences.
3. Coca-Cola
Coca-Cola’s “Share a Coke” campaign involved printing names on their bottles, which was culturally sensitive and engaging to consumers worldwide. It allowed people to find a personal connection with the brand and, consequently, enhanced their user experience.
Conclusion
In a globalized world, cultural sensitivity in globalization testing is a fundamental aspect of ensuring a positive user experience in different regions. By respecting cultural differences and adapting products and services accordingly, businesses can improve engagement, build trust, and seize opportunities for growth. Cultural sensitivity is not just a nice-to-have but a necessity for companies aiming to succeed on a global scale.
The world of medicine is in the midst of a profound transformation, and at the heart of this revolution is a field that deals with the exceptionally tiny – nanotechnology. This captivating blend of science and engineering has ushered in an era where manipulating matter at the nanoscale, with structures smaller than 100 nanometers, is now commonplace. Nanotechnology’s role in medicine, aptly named nanomedicine, has set the stage for remarkable changes in healthcare. In this article, we’ll delve into the myriad ways nanotechnology is influencing the field of medicine, from targeted drug delivery to improved diagnostics and regenerative therapies.
Nanotechnology’s Foundations
Before we explore nanotechnology’s applications in medicine, it’s essential to grasp the fundamental principles of this interdisciplinary science. Nanotechnology operates at a scale where individual molecules and atoms are manipulated to create materials, devices, and systems with unique properties. The ability to engineer matter at such a minute level opens doors to a multitude of applications, including electronics, materials science, and, significantly, medicine.
Precise Drug Delivery:
One of the most compelling aspects of nanotechnology’s influence on medicine is its impact on drug delivery. Traditional drug delivery methods often result in drugs circulating throughout the body, which can lead to side effects and diminished efficacy. Nanoparticles, engineered with precision, offer an innovative solution to this age-old problem.
These tiny carriers can transport drugs directly to their intended destination, reducing side effects and enhancing the treatment’s effectiveness. For instance, in cancer therapy, nanoparticles can be designed to target and destroy cancer cells while sparing healthy tissues, thereby minimizing the harm caused to the patient.
Enhanced Imaging and Diagnostics:
Nanotechnology has also revolutionized medical imaging and diagnostics. It has enabled the development of contrast agents that significantly enhance the quality of images. These agents help in detecting and diagnosing diseases at an earlier stage and with greater accuracy.
For example, the use of quantum dots, nanoscale semiconductor particles, has improved the visualization of tissues and structures. This is particularly critical in early disease detection, as in the case of cancer, where early diagnosis can be a matter of life and death.
Regenerative Medicine:
Regenerative medicine, which focuses on repairing or replacing damaged tissues and organs, stands to benefit immensely from nanotechnology. Nanoscale materials, such as scaffolds and nanoparticles, can mimic the extracellular matrix, stimulating the body’s natural regenerative processes. This offers hope for patients with conditions like spinal cord injuries, osteoarthritis, and other degenerative diseases.
Personalized Medicine:
Nanotechnology plays a pivotal role in enabling personalized medicine. By tailoring treatments to an individual’s genetic makeup, nanomedicine offers the potential for significantly improved treatment outcomes. For instance, nanoparticles can be used to deliver gene therapies designed to address specific genetic mutations, ensuring more precise and effective treatment.
Addressing Antibiotic Resistance:
The rise of antibiotic-resistant bacteria is a growing concern in healthcare. Nanotechnology presents a potential solution by creating nanomaterials capable of targeting and destroying antibiotic-resistant pathogens. This approach holds promise in combating infections that are no longer responsive to traditional antibiotics.
Ethical Considerations:
While the potential of nanotechnology in medicine is vast, it’s essential to consider the associated ethical implications. These include issues related to patient privacy, informed consent, and equitable access to advanced treatments. As nanomedicine continues to advance, addressing these ethical concerns is paramount to ensure that the benefits of these innovations are accessible to all.
In the context of healthcare and medical regulations, a “DEA number lookup by NPI” refers to the process of cross-referencing the National Provider Identifier (NPI) of a healthcare provider with their corresponding Drug Enforcement Administration (DEA) number. This lookup is essential for tracking and verifying the prescribing practices of healthcare professionals, particularly in relation to controlled substances. It is a critical tool in maintaining the integrity of healthcare and ensuring that the prescription of controlled substances follows established regulations and guidelines. Such verification tools exemplify how digital innovation permeates every facet of the healthcare industry, alongside advances such as nanomedicine.
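As a rough illustration of such a cross-reference, the sketch below pairs invented provider records with the publicly documented DEA check-digit rule. Real lookups would query the NPPES NPI registry and an authoritative DEA data source; every record and identifier here is hypothetical:

```python
# Hypothetical sketch of an NPI -> DEA cross-reference.
# All records below are invented for illustration only.
from typing import Optional

PROVIDER_RECORDS = {
    "1234567893": {"name": "Dr. A. Example", "dea_number": "BE1234563"},
    "1987654321": {"name": "Dr. B. Sample", "dea_number": None},  # not DEA-registered
}

def dea_lookup_by_npi(npi: str) -> Optional[str]:
    """Return the DEA number on file for an NPI, or None if absent."""
    record = PROVIDER_RECORDS.get(npi)
    return record["dea_number"] if record else None

def dea_checksum_ok(dea: str) -> bool:
    """Validate the standard DEA check digit: (d1+d3+d5) + 2*(d2+d4+d6)
    must end in the seventh digit."""
    digits = [int(c) for c in dea[2:]]  # skip the two leading letters
    if len(digits) != 7:
        return False
    total = (digits[0] + digits[2] + digits[4]) + 2 * (digits[1] + digits[3] + digits[5])
    return total % 10 == digits[6]

dea = dea_lookup_by_npi("1234567893")
print(dea, dea_checksum_ok(dea))  # BE1234563 True
```

The check-digit validation catches transcription errors locally, before any registry query is made, which is the kind of layered verification the lookup process described above depends on.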
Conclusion
In conclusion, nanotechnology’s impact on medicine is undeniable. It has the potential to revolutionize the way we diagnose and treat diseases, make drug delivery more effective, and even address antibiotic resistance. As the field of nanomedicine continues to advance, it is crucial to overcome challenges and address ethical concerns to ensure that these innovations benefit all of humanity.