
Will AI accelerate or delay the race to net-zero emissions?


Artificial intelligence (AI) is already transforming the global economy. Companies are investing hundreds of billions of dollars each year in these technologies. In almost every sector, AI is being used to drive operational efficiencies, manage complexity, provide personalized services and speed up innovation.

As AI’s influence on society grows, questions arise about its impact on greenhouse-gas emissions: will its myriad applications help to reduce the world’s carbon footprint or hinder climate progress? The answer will depend on how AI models are developed and operated, and what changes result from their use. And scientists simply don’t know how all that will pan out — a worrying situation when there is so much at stake.

Most discussions so far about AI’s environmental consequences have focused on the direct impacts of these computationally intensive technologies — how much energy, water or other resources they consume and the amount of greenhouse gases they generate. But the global repercussions of AI applications for society will be much broader, from transforming health care and education to increasing the efficiency of mining, transportation and agriculture.

Such AI-driven changes can lead to indirect effects on emissions, which might be positive or negative. These indirect effects also need to be taken into account, and could vastly exceed those from direct impacts [1,2]. Assessments of all types of AI impact are urgently needed. Here’s what we know and what we don’t.

Uncertainty ahead

The direct impacts of AI on climate so far are relatively small. AI operations for large models require millions of specialized processors in dedicated data centres with powerful cooling systems. AI processors installed in 2023 consume 7–11 terawatt hours (TWh) of electricity annually, which is about 0.04% of global electricity use [3]. That is less than for cryptocurrency mining (100–150 TWh) and conventional data centres (500–700 TWh), which together accounted for 2.4–3.3% of global electricity demand in 2022, according to the International Energy Agency (IEA). Thus, in terms of total global greenhouse-gas emissions, we calculate that AI today is responsible for about 0.01%, on the basis of IEA assessments showing that data centres and transmission networks together account for about 0.6% (see go.nature.com/3q7e6pv).
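The shares quoted here are simple arithmetic and can be reproduced with a short back-of-envelope script. Only the 7–11 TWh figure and the 0.6% emissions share come from the text; the global electricity total and the combined consumption of data centres plus transmission networks used below are assumed round figures for illustration.

```python
# Back-of-envelope reproduction of the shares quoted above.
# The two totals below are assumed round figures, not values from the article or the IEA.
AI_ELECTRICITY_TWH = (7, 11)            # AI processors installed in 2023 (from the text)
GLOBAL_ELECTRICITY_TWH = 25_000         # assumed global electricity use, order of magnitude
DC_AND_NETWORKS_TWH = 800               # assumed data centres + transmission networks
DC_AND_NETWORKS_EMISSIONS_SHARE = 0.6   # % of global emissions (IEA figure cited in the text)

for twh in AI_ELECTRICITY_TWH:
    electricity_share = 100 * twh / GLOBAL_ELECTRICITY_TWH
    # Scale the 0.6% emissions share by AI's fraction of data-centre-plus-network electricity.
    emissions_share = DC_AND_NETWORKS_EMISSIONS_SHARE * twh / DC_AND_NETWORKS_TWH
    print(f"{twh} TWh -> {electricity_share:.3f}% of electricity, "
          f"~{emissions_share:.3f}% of emissions")
# Both bounds land near 0.04% of electricity use and 0.01% of emissions.
```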

AI use is expanding rapidly. Over the past decade, the compute capacity used to train advanced large language models has increased tenfold each year. Demand for AI services is expected to rise by 30–40% annually over the next 5–10 years. And more powerful AI models will require more energy. One estimate suggests that, by 2027, global AI-related energy consumption could be 10 times greater than it was in 2023 [3], or about as much as is consumed annually by people watching television in US homes. Although there could be challenges for local electricity grids in regions where many data centres are based, from a global perspective, AI should not lead directly to large, near-term increases in greenhouse-gas emissions.

Improvements in energy efficiency could offset some of the projected increase in power demand, as they did when data centres expanded in the 2010s [4]. More-efficient AI algorithms, smaller models and innovations in hardware and cooling systems should help [5,6]. For example, small language models, such as Microsoft’s Phi-2 and Google’s Gemini Nano, can run on mobile phones and deliver capabilities previously seen only with the largest models. AI companies are increasingly investing in renewable power and setting up operations in countries or regions with abundant clean-energy supplies, such as Iceland.

Indirect effects are less clear, however. Some AI applications are designed to tackle climate change, for example to reduce emissions from the energy and transport sectors, from buildings and industry operations and from land use. Optimizing supply chains will make manufacturing more efficient and support the integration of renewable energy into electricity grids. Speeding up the development of new materials for batteries and renewable energy [7,8] will be a boon.

There could also be some negative indirect impacts. Embedding AI into existing applications, from health care to entertainment, might drive more electricity use. Oil and gas exploration and extraction could become cheaper, potentially driving up production. And without proper governance, the widespread use of AI could affect political and economic stability, with ramifications for poverty, food security and social inequalities — all of which could have knock-on effects for emissions [9].

And that’s just existing AI systems. How will future AI technologies develop? How will their expansion affect the global economy? And how will this affect decarbonization? Researchers simply don’t know; it’s too early to tell. It is tempting to simply extrapolate past AI electricity-use trends into the future, but overlooking social, economic and technological factors often results in large forecasting errors [4,5,10]. Similarly, an overly simplistic view of the impacts of indirect emissions risks underestimating AI’s potential for accelerating important climate-solution breakthroughs, such as the development of less expensive and more powerful batteries in months rather than decades [11].

AI-driven emissions scenarios

Recognizing these huge uncertainties, here we call on researchers to develop a set of policy-relevant scenarios to quantify the effects that AI expansion could have on the climate under a range of assumptions. Routinely used by financial institutions to understand risks and opportunities and plan investments, scenarios combine quantitative models with expert consultations. Rather than making predictions, they explore many possible futures based on influential factors.

A data centre near Reykjavik uses renewable energy for cooling. Credit: Sigtryggur Ari/Reuters

Specifically, we recommend that a suite of scenarios be built to better understand how AI expansion might affect emissions, both directly and indirectly. These scenarios should range from a ‘reference’ case without widespread adoption of powerful AI technologies, to an ‘aspirational’ case in which all the United Nations Sustainable Development Goals are achieved; scenarios should also include ones with undesirable outcomes.

Five elements are essential for AI-driven emissions scenarios to be credible and useful.

Link to existing climate scenarios. The climate community already uses integrated assessment models (IAMs) to assess future greenhouse-gas emissions quantitatively on the basis of qualitative narratives about potential socio-economic, demographic, policy, technology and governance outcomes. Five standard scenarios, or Shared Socioeconomic Pathways (SSPs), are widely used. These range from a future in which the world is deeply divided and remains hooked on fossil fuels to a more optimistic scenario of global cooperation, decoupling of economic growth from emissions and serious investment in clean energy.

AI should be integrated into these pathways, along with the global shocks and technological breakthroughs that might accompany it. This would require major work, including incorporating expertise from the AI community, rethinking each of the pathway narratives and exploring whether new ones need to be added. Could AI take the world to a more radically green future, or a more dystopian one? What factors define those outcomes? How plausible are they? Scenarios can help to narrow down answers.

Turning these narratives into quantitative scenarios will require developing new analytical models, collecting new types of data and establishing an institutional structure to enable rapid updates to keep up with the fast pace of societal transformations that AI is driving, as we outline here.

Develop quantitative analytical frameworks. Developing IAMs for exploring the influence of AI will require improved data and analytical frameworks for both direct and indirect impacts. The biggest challenge will be quantifying the range of indirect effects resulting from AI-driven societal transformations, as well as the effects of AI-powered innovation on climate-relevant advances and breakthroughs.

For example, AI personalization could encourage sustainable consumption, but it could also increase demand for resource-intensive goods. And disentangling the emissions impacts of AI-enabled innovations from other technologies that lower emissions, such as renewables or carbon capture, will be challenging because the pace of research and development differs across sectors. Policies and regulations are also often slow to catch up. Quantifying the interplay of these dynamics will be difficult.

Comparing and replicating scenarios will be key to improving them as AI systems are rolled out. Researchers should regularly run comparisons between different models for direct and indirect AI-related emissions, coordinated through platforms used by the climate community, such as the Energy Modeling Forum and the Integrated Assessment Modeling Consortium. Scientists must ensure that the data and assumptions in these analyses are fully documented, freely shared and completely replicable by others.

Share data. Data availability is a challenge — especially for fast-moving industries such as AI, in which data are often private or tied to proprietary information. For example, more data are needed on AI workloads in large cloud-computing companies, their electricity and carbon intensity, and trends in efficiencies gained for building and using AI models.

Methods to safely and openly share representative, measured, aggregated and anonymized data without compromising sensitive information are needed. AI can build on examples from other industries — such as the Getting the Numbers Right initiative, which keeps track of carbon dioxide and energy performance indicators in the global cement industry, and the Solomon Energy Intensity Index for fuel refining and pipelines.

Standards should be established for measuring, reporting, verifying and disseminating AI-related data, to ensure both quality and broad accessibility. Recent legislation, such as the European Union’s AI Act and the European Energy Efficiency Directive, could help to drive the development of standards. Although neither regulation directly mandates specific reporting on AI energy consumption, their emphasis on data-centre transparency and efficiency could promote the development of reporting standards.

Issue rapid updates. AI technology is advancing so quickly that scenarios will need to be revised at least once per year, and ideally twice. This is more frequent than is currently done for climate-change scenarios, which are updated every 6–7 years. Annual or biannual updates will be challenging, given the need to collect new data and to develop analytical frameworks as AI systems, applications and breakthroughs emerge.

Because of the potential for AI to either reduce or increase energy demand, researchers must update models that represent societal demand for energy, as well as explore how this demand will change as AI technologies evolve. Scenarios with varying resolutions might be released on different time frames. For example, coarse-resolution scenarios might be updated every few months; more-detailed scenarios could be released every 2–3 years.

Build an international consortium. An international consortium needs to be set up to undertake the development of AI-driven emissions scenarios. It should gather specialists from around the world and represent all the relevant disciplines — from computer and sustainability science to sociology and economics. We suggest this AI-driven emissions-scenario community be co-sponsored by international scientific networks that focus on sustainability, such as the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria, and by international non-governmental organizations focused on AI and society. Examples include the Partnership on AI or the newly established UN Futures Lab, which has been set up to coordinate and improve strategic foresight across the UN to guide long-term decision making.

Consortia that are associated with key IAM and energy-systems models, such as the IEA Technology Collaboration Programme or the IIASA’s programmes, could ensure both open access to data and models, and immediate relevance to the broader climate-scenario modelling communities. The UN and other bodies, such as the International Telecommunication Union in Geneva, Switzerland, should be engaged — but without compromising on the need for agility and speed.

Financial support will be needed to maintain the consortium and support the regular update of scenarios. This could come from a combination of philanthropic, private, governmental and intergovernmental sources.

AI is one of the most disruptive technologies of our time. It’s imperative that decisions around its development and use — today and as it evolves — are made with sustainability in mind. Only through developing a set of standard AI-driven emissions scenarios will policymakers, investors, advocates, private companies and the scientific community have the tools to make sound decisions regarding AI and the global race to net-zero emissions.


How AI is being used to accelerate clinical trials


Illustration credit: Taj Francis

For decades, computing power followed Moore’s law, advancing at a predictable pace. The number of components on an integrated circuit doubled roughly every two years. In 2012, researchers coined the term Eroom’s law (Moore spelled backwards) to describe the contrasting path of drug development [1]. Over the previous 60 years, the number of drugs approved in the United States per billion dollars in R&D spending had halved every nine years. It can now take more than a billion dollars in funding and a decade of work to bring one new medication to market. Half of that time and money is spent on clinical trials, which are growing larger and more complex. And only one in seven drugs that enters phase I trials is eventually approved.

Some researchers are hoping that the fruits of Moore’s law can help to curtail Eroom’s law. Artificial intelligence (AI) has already been used to make strong inroads into the early stages of drug discovery, assisting in the search for suitable disease targets and new molecule designs. Now scientists are starting to use AI to manage clinical trials, including the tasks of writing protocols, recruiting patients and analysing data.

Reforming clinical research is “a big topic of interest in the industry”, says Lisa Moneymaker, the chief technology officer and chief product officer at Saama, a software company in Campbell, California, that uses AI to help organizations automate parts of clinical trials. “In terms of applications,” she says, “it’s like a kid in a candy store.”

Trial by design

The first step of the clinical-trials process is trial design. What dosages of drugs should be given? To how many patients? What data should be collected on them? The lab of Jimeng Sun, a computer scientist at the University of Illinois Urbana-Champaign, developed an algorithm called HINT (hierarchical interaction network) that can predict whether a trial will succeed, based on the drug molecule, target disease and patient eligibility criteria. They followed up with a system called SPOT (sequential predictive modelling of clinical trial outcome) that additionally takes into account when the trials in its training data took place and weighs more recent trials more heavily. Based on the predicted outcome, pharmaceutical companies might decide to alter a trial design, or try a different drug completely.

A company called Intelligent Medical Objects in Rosemont, Illinois, has developed SEETrials, a method for prompting OpenAI’s large language model GPT-4 to extract safety and efficacy information from the abstracts of clinical trials. This enables trial designers to quickly see how other researchers have designed trials and what the outcomes have been. The lab of Michael Snyder, a geneticist at Stanford University in California, developed a tool last year called CliniDigest that simultaneously summarizes dozens of records from ClinicalTrials.gov, the main US registry for medical trials, adding references to the unified summary. They’ve used it to summarize how clinical researchers are using wearables such as smartwatches, sleep trackers and glucose monitors to gather patient data. “I’ve had conversations with plenty of practitioners who see wearables’ potential in trials, but do not know how to use them for highest impact,” says Alexander Rosenberg Johansen, a computer-science student in Snyder’s lab. “Best practice does not exist yet, as the field is moving so fast.”
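To illustrate the abstract-mining approach that tools such as SEETrials take, the sketch below prompts a large language model to return structured safety and efficacy fields from a trial abstract. It is not the SEETrials prompt or schema: the model name, field list and JSON format are assumptions, and the OpenAI Python SDK is used only as a convenient example client.

```python
# Minimal sketch of prompting an LLM to pull structured safety/efficacy fields
# from a clinical-trial abstract. Illustrative only; not the SEETrials method.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = """Extract the following fields from the clinical-trial abstract below.
Return JSON with keys: intervention, sample_size, primary_endpoint,
efficacy_result, grade_3_or_higher_adverse_events. Use null for anything not reported.

Abstract:
{abstract}
"""

def extract_trial_fields(abstract: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model name; the article mentions GPT-4
        messages=[{"role": "user", "content": PROMPT.format(abstract=abstract)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```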

Most eligible

The most time-consuming part of a clinical trial is recruiting patients, taking up to one-third of the study length. One in five trials doesn’t even recruit the required number of people, and nearly all trials exceed the expected recruitment timelines. Some researchers would like to accelerate the process by relaxing some of the eligibility criteria while maintaining safety. A group at Stanford led by James Zou, a biomedical data scientist, developed a system called Trial Pathfinder that analyses a set of completed clinical trials and assesses how adjusting the criteria for participation — such as thresholds for blood pressure and lymphocyte counts — affects hazard ratios, or rates of negative incidents such as serious illness or death among patients. In one study [2], they applied it to drug trials for a type of lung cancer. They found that adjusting the criteria as suggested by Trial Pathfinder would have doubled the number of eligible patients without increasing the hazard ratio. The study showed that the system also worked for other types of cancer and actually reduced harmful outcomes because it made sicker people — who had more to gain from the drugs — eligible for treatment.
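The underlying idea can be sketched in a few lines, under stated assumptions rather than with Trial Pathfinder's actual code: apply a strict and a relaxed version of one eligibility threshold to a retrospective cohort, then compare how many patients qualify and how the treated-versus-control hazard ratio shifts. The column names, file name and cut-off values below are hypothetical.

```python
# Toy illustration of the eligibility-relaxation idea (not the Trial Pathfinder tool).
# Requires: pandas, lifelines. Column names and cut-offs are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

def hazard_ratio(df: pd.DataFrame) -> float:
    cph = CoxPHFitter()
    cph.fit(df[["months_survived", "death_observed", "treated"]],
            duration_col="months_survived", event_col="death_observed")
    return float(cph.hazard_ratios_["treated"])

def compare_criteria(records: pd.DataFrame, lymphocyte_cutoff: float) -> None:
    eligible = records[records["lymphocyte_count"] >= lymphocyte_cutoff]
    print(f"cut-off {lymphocyte_cutoff}: {len(eligible)} eligible, "
          f"HR = {hazard_ratio(eligible):.2f}")

# records = pd.read_csv("retrospective_cohort.csv")   # hypothetical retrospective data
# compare_criteria(records, lymphocyte_cutoff=1.0)    # original, stricter criterion
# compare_criteria(records, lymphocyte_cutoff=0.5)    # relaxed criterion
```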

Chart: drugs from companies based in six selected countries that made it from phase I clinical trials to regulatory submission, 2007–2022. Sources: IQVIA Pipeline Intelligence (Dec. 2022)/IQVIA Institute (Jan. 2023)

AI can eliminate some of the guesswork and manual labour from optimizing eligibility criteria. Zou says that sometimes even teams working at the same company and studying the same disease can come up with different criteria for a trial. But now several firms, including Roche, Genentech and AstraZeneca, are using Trial Pathfinder. More recent work from Sun’s lab in Illinois has produced AutoTrial, a method for training a large language model so that a user can provide a trial description and ask it to generate an appropriate criterion range for, say, body mass index.

Once researchers have settled on eligibility criteria, they must find eligible patients. The lab of Chunhua Weng, a biomedical informatician at Columbia University in New York City (who has also worked on optimizing eligibility criteria), has developed Criteria2Query. Through a web-based interface, users can type inclusion and exclusion criteria in natural language, or enter a trial’s identification number, and the program turns the eligibility criteria into a formal database query to find matching candidates in patient databases.
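A minimal sketch of the criteria-to-query step is shown below, assuming the natural-language criteria have already been parsed into structured form (the hard part that Criteria2Query's language processing handles). The table and column names are hypothetical, and this is not the tool's actual output format.

```python
# Toy sketch: turn structured eligibility criteria into a query over a patient table.
# Table and column names are hypothetical; values are trusted (no SQL-injection handling).
criteria = [
    {"field": "age",           "op": ">=", "value": 18},
    {"field": "age",           "op": "<=", "value": 75},
    {"field": "hba1c_percent", "op": ">=", "value": 7.0},
    {"field": "on_insulin",    "op": "=",  "value": 0},   # exclusion: no insulin use
]

def to_sql(parsed_criteria: list[dict], table: str = "patients") -> str:
    clauses = [f"{c['field']} {c['op']} {c['value']}" for c in parsed_criteria]
    return f"SELECT patient_id FROM {table} WHERE " + " AND ".join(clauses)

print(to_sql(criteria))
# SELECT patient_id FROM patients WHERE age >= 18 AND age <= 75 AND hba1c_percent >= 7.0 AND on_insulin = 0
```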

Weng has also developed methods to help patients look for trials. One system, called DQueST, has two parts. The first uses Criteria2Query to extract criteria from trial descriptions. The second part generates relevant questions for patients to help narrow down their search. Another system, TrialGPT, from Sun’s lab in collaboration with the US National Institutes of Health, is a method for prompting a large language model to find appropriate trials for a patient. Given a description of a patient and clinical trial, it first decides whether the patient fits each criterion in a trial and offers an explanation. It then aggregates these assessments into a trial-level score. It does this for many trials and ranks them for the patient.
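The two-stage matching pattern described above can be sketched as follows. This is not TrialGPT's code: the per-criterion judgement is left as a stub where an LLM prompt would go, and the aggregation weights are arbitrary placeholders.

```python
# Sketch of criterion-level judgements aggregated into a trial-level score, then ranked.
from dataclasses import dataclass

@dataclass
class CriterionJudgement:
    criterion: str
    verdict: str        # "met", "not_met" or "unknown"
    explanation: str

def judge_criterion(patient_summary: str, criterion: str) -> CriterionJudgement:
    """Placeholder for an LLM prompt that returns a verdict plus an explanation."""
    raise NotImplementedError

def trial_score(judgements: list[CriterionJudgement]) -> float:
    weights = {"met": 1.0, "unknown": 0.5, "not_met": 0.0}   # arbitrary weights
    return sum(weights[j.verdict] for j in judgements) / len(judgements)

def rank_trials(patient_summary: str, trials: dict[str, list[str]]) -> list[tuple[str, float]]:
    scores = {
        trial_id: trial_score([judge_criterion(patient_summary, c) for c in criteria])
        for trial_id, criteria in trials.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```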

Helping researchers and patients find each other doesn’t just speed up clinical research. It also makes it more robust. Often trials unnecessarily exclude populations such as children, the elderly or people who are pregnant, but AI can find ways to include them. People with terminal cancer and those with rare diseases have an especially hard time finding trials to join. “These patients sometimes do more work than clinicians in diligently searching for trial opportunities,” Weng says. AI can help match them with relevant projects.

AI can also reduce the number of patients needed for a trial. A start-up called Unlearn in San Francisco, California, creates digital twins of patients in clinical trials. Based on an experimental patient’s data at the start of a trial, researchers can use the twin to predict how the same patient would have progressed in the control group and compare outcomes. This method typically reduces the number of control patients needed by between 20% and 50%, says Charles Fisher, Unlearn’s founder and chief executive. The company works with a number of small and large pharmaceutical companies. Fisher says digital twins benefit not only researchers, but also patients who enrol in trials, because they have a lower chance of receiving the placebo.

Chart: number of clinical-trial subjects by disease type, 2010–2022. Source: Citeline Trialtrove/IQVIA Institute (Jan. 2023)

Patient maintenance

The hurdles in clinical trials don’t end once patients enrol. Drop-out rates are high. In one analysis of 95 clinical trials, nearly 40% of patients stopped taking the prescribed medication in the first year. In a recent review article [3], researchers at Novartis mentioned ways that AI can help. These include using past data to predict who is most likely to drop out so that clinicians can intervene, or using AI to analyse videos of patients taking their medication to ensure that doses are not missed.
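A minimal sketch of the drop-out-prediction idea, assuming a table of past participants with known outcomes, might look like the following. The feature names, model choice and risk threshold are hypothetical, not anything the review describes.

```python
# Illustrative sketch: train a classifier on past trial data to flag participants
# at high risk of dropping out, so site staff can intervene early.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["age", "distance_to_site_km", "visits_missed_so_far",
            "reported_side_effects", "weeks_enrolled"]

def train_dropout_model(history: pd.DataFrame) -> GradientBoostingClassifier:
    model = GradientBoostingClassifier()
    model.fit(history[FEATURES], history["dropped_out"])   # past participants, known outcome
    return model

def flag_at_risk(model, current: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    current = current.copy()
    current["dropout_risk"] = model.predict_proba(current[FEATURES])[:, 1]
    return current[current["dropout_risk"] >= threshold]   # short list for follow-up calls
```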

Chatbots can answer patients’ questions, whether during a study or in normal clinical practice. One study [4] took questions and answers from Reddit’s AskDocs forum and gave the questions to ChatGPT. Health-care professionals preferred ChatGPT’s answers to the doctors’ answers nearly 80% of the time. In another study [5], researchers created a tool called ChatDoctor by fine-tuning a large language model (Meta’s LLaMA-7B) on patient-doctor dialogues and giving it real-time access to online sources. ChatDoctor could answer questions about medical information that was more recent than ChatGPT’s training data.

Putting it together

AI can help researchers manage incoming clinical-trial data. The Novartis researchers reported that it can extract data from unstructured reports, as well as annotate images or lab results, add missing data points (by predicting values in results) and identify subgroups among a population that responds uniquely to a treatment. Zou’s group at Stanford has developed PLIP, an AI-powered search engine that lets users find relevant text or images within large medical documents. Zou says they’ve been talking with pharmaceutical companies that want to use it to organize all of the data that comes in from clinical trials, including notes and pathology photos. A patient’s data might exist in different formats, scattered across different databases. Zou says they’ve also done work with insurance companies, developing a language model to extract billing codes from medical records, and that such techniques could also extract important clinical trial data from reports such as recovery outcomes, symptoms, side effects and adverse incidents.

To collect data for a trial, researchers sometimes have to produce more than 50 case report forms. A company in China called Taimei Technology is using AI to generate these automatically based on a trial’s protocol.

A few companies are developing platforms that integrate many of these AI approaches into one system. Xiaoyan Wang, who heads the life-science department at Intelligent Medical Objects, co-developed AutoCriteria, a method for prompting a large language model to extract eligibility requirements from clinical trial descriptions and format them into a table. This informs other AI modules in their software suite, such as those that find ideal trial sites, optimize eligibility criteria and predict trial outcomes. Soon, Wang says, the company will offer ChatTrial, a chatbot that lets researchers ask about trials in the system’s database, or what would happen if a hypothetical trial were adjusted in a certain way.

The company also helps pharmaceutical firms to prepare clinical-trial reports for submission to the US Food and Drug Administration (FDA), the organization that gives final approval for a drug’s use in the United States. What the company calls its Intelligent Systematic Literature Review extracts data from comparison trials. Another tool searches social media for what people are saying about diseases and drugs in order to demonstrate unmet needs in communities, especially those that feel underserved. Researchers can add this information to reports.

Zifeng Wang, a student in Sun’s lab in Illinois, says he’s raising money with Sun and another co-founder, Benjamin Danek, for a start-up called Keiji AI. A product called TrialMind will offer a chatbot to answer questions about trial design, similar to Xiaoyan Wang’s. It will do things that might normally require a team of data scientists, such as writing code to analyse data or producing visualizations. “There are a lot of opportunities” for AI in clinical trials, he says, “especially with the recent rise of larger language models.”

At the start of the pandemic, Saama worked with Pfizer on its COVID-19 vaccine trial. Using Saama’s AI-enabled technology, SDQ, they ‘cleaned’ data from more than 30,000 patients in a short time span. “It was the perfect use case to really push forward what AI could bring to the space,” Moneymaker says. The tool flags anomalous or duplicate data, using several kinds of machine-learning approaches. Whereas experts might need two months to manually discover any issues with a data set, such software can do it in less than two days.
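A rough sketch of what such data cleaning can look like is shown below: flag exact duplicates and statistical outliers so that human reviewers can focus on a short list. This is not Saama's SDQ; the column names and the choice of anomaly detector are illustrative assumptions.

```python
# Illustrative data-cleaning sketch: flag duplicate and anomalous lab records for review.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_issues(lab_results: pd.DataFrame) -> pd.DataFrame:
    df = lab_results.copy()
    # Exact duplicates: same patient, visit and test recorded more than once.
    df["is_duplicate"] = df.duplicated(subset=["patient_id", "visit", "test_name"],
                                       keep="first")
    # Statistical outliers in the measured value (IsolationForest marks them with -1).
    detector = IsolationForest(contamination=0.01, random_state=0)
    df["is_anomalous"] = detector.fit_predict(df[["test_value"]]) == -1
    return df[df["is_duplicate"] | df["is_anomalous"]]
```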

Other tools developed by Saama can predict when trials will hit certain milestones or lower drop-out rates by predicting which patients will need a nudge. Its tools can also combine all the data from a patient — such as lab tests, stats from wearable devices and notes — to assess outcomes. “The complexity of the picture of an individual patient has become so huge that it’s really not possible to analyse by hand anymore,” Moneymaker says.

Xiaoyan Wang notes that there are several ethical and practical challenges to AI’s deployment in clinical trials. AI models can be biased. Their results can be hard to reproduce. They require large amounts of training data, which could violate patient privacy or create security risks. Researchers might become too dependent on AI. Algorithms can be too complex to understand. “This lack of transparency can be problematic in clinical trials, where understanding how decisions are made is crucial for trust and validation,” she says. A recent review article [6] in the International Journal of Surgery states that using AI systems in clinical trials “can’t take into account human faculties like common sense, intuition and medical training”.

Moneymaker says the processes for designing and running clinical trials have often been slow to change, but adds that the FDA has relaxed some of its regulations in the past few years, leading to “a spike of innovation”: decentralized trials and remote monitoring have increased as a result of the pandemic, opening the door for new types of data. That has coincided with an explosion of generative-AI capabilities. “I think we have not even scratched the surface of where generative-AI applicability is going to take us,” she says. “There are problems we couldn’t solve three months ago that we can solve now.”


Take these steps to accelerate the path to gender equity in health sciences


Diversity in science is instrumental in achieving major breakthroughs. Without further accelerating gender parity and other types of diversity — including focusing on the needs of those in and working towards leadership roles — we will continue to lose valuable ground. At a time when academia faces some of its greatest workforce gaps in history, some of our brightest scholars are leaving institutions before reaching their full potential due to a lack of recognition.

Christina Mangurian. Credit: UCSF

We applaud changes that have been made for early-career researchers, with more women and historically excluded scholars entering research-training institutions now than ever before. But too often, we lose out on investments made by government funders and institutions in early-career researchers because the system was not built to increase the diversity of leaders as they move up the career ladder.

For 25 years, women have made up more than 40% of the medical student body in the United States, yet they account for less than 20% of department chairs in academic medicine. Without a major policy shift to accelerate the rate of diversification among leaders in the country, it will take 50 years for academic medicine to reach gender parity [1]. That’s way too long.

We must address this with urgency, as women’s perspectives and leadership are key to developing new therapies and improving representation in clinical trials. We need more role models for trainees and junior faculty. All of this improves retention across the pipeline and leads to more innovative discovery.

Claire D. Brindis. Credit: Marco Sanchez, UCSF Documents and Media

So, what do we do? We must re-evaluate the way the entire scientific academic enterprise is set up to directly, and indirectly, create challenging climates for women, especially for women of colour. Below, we focus on the policies and procedures that would offer the highest yield in the context of the United States, but that have global relevance.

Elevate the status of gender equity on campus

Public policy value statements. Commitments by academic leaders to diversity measures must be backed by strong policies, protocols and actions directed at all career stages, but particularly focused on supporting emerging and senior women leaders. Organizations must hold leaders accountable for incidents of bias, discrimination and bullying and institute formal, tailored training to promote allyship for some, and active rehabilitation for others.

Confidential reporting. We need better reporting systems to ensure that researchers can highlight gender disparities without fear of retaliation. Ombudsman and whistleblower offices can be helpful, but in the United States, many of these are understaffed to meet the demand. There is also an urgent need to test which approaches are most effective at correcting behaviour.

Implement institutional family-friendly policies

Childbearing/rearing leave. In the United States, there have been gains for faculty members at some institutions and major gains nationally for trainees. But there is room to improve, such as provision of affordable, on-site childcare.

Lactation policies. Only 8% of US medical schools provide financial incentives to make up for clinical time lost while lactating in the first 12 months post-birth. Institutions should be leading the way in establishing policies that recognize the biological factors impacting careers.

Elder care and other informal care. A 2023 study [2] found that close to half of female faculty are informal caregivers, and close to half are providing elder care as they reach mid-career. Given that institutions are competing to attract mid- or senior-level women, expansion of paid leave policies to include elder care is warranted.

Formalize equitable distribution of resources and access to opportunities

Compensation. Institutions should regularly perform salary reviews as a means of correcting disparities, especially for women of colour. Leaders should also regularly review starting salaries, distribution of endowed chairs, salary increases that are far above the norm, and recruitment and retention packages.

Sponsorship. Mentoring and sponsorship roles are increasingly recognized, but more oversight is needed. Behind closed doors is where decisions are made as to who gains access to crucial leadership opportunities; making the invisible visible is key to assuring greater institutional equity.

Focus on faculty promotion and retention

Resources. Offering equitable start-up packages and discretionary funds for new faculty members, as well as compensation for dedicated mentors of historically excluded early-career researchers, can create a supportive professional environment. Such resources are important to offset the time demands placed on excluded groups, who are frequently asked to serve on campus and department committees to meet diversity metrics.

Peer support. Community affinity groups facilitate knowledge exchange needed for career advancement, as well as ‘real time’ support for faculty members. They are easy to set up and yield high returns for participants.

A multi-pronged approach is needed to accelerate gender parity in academic medicine leadership. Rather than continue to attribute disparities to individual ‘failures’, institutions must recognize that structural and organizational interventions can make transformational change.

Competing Interests

The authors declare no competing interests.


Lenovo and Anaconda join forces to accelerate AI development


A new partnership has been announced this month between Lenovo and Anaconda that promises to help both data scientists and AI professionals. Lenovo, a giant in the computing industry, has joined forces with Anaconda, a powerhouse in open-source AI and data science software. This collaboration is set to streamline the AI development process, offering a suite of advanced tools that will empower data scientists to push the boundaries of AI innovation.

Lenovo’s high-performance data science workstations, known for their robust capabilities, are being integrated with Anaconda’s comprehensive software suite. This means that as a data scientist, you’ll have access to enterprise-grade open-source software that’s been optimized to run seamlessly on Lenovo’s ThinkStation and ThinkPad workstations. The focus of this integration is not just on performance but also on the security and privacy of data, which is a paramount concern in the AI industry today.


One of the most significant advantages of this partnership is the cost-effectiveness it brings to the table. Cloud-based AI services can be expensive, but Lenovo and Anaconda are offering a solution that’s easier on the budget without compromising on quality or capability. With the power of Intel processors and NVIDIA GPUs, Lenovo’s workstations are now even more adept at handling complex AI tasks, such as optimizing large language models. These machines are designed to be versatile, complementing cloud-based AI resources and enhancing productivity for AI professionals.

“With Lenovo’s trusted workstation leadership and Anaconda’s trusted leadership in open-source software support and reliability, the partnership is a perfect match,” said Rob Herman, Vice President and General Manager, Workstation and Client AI Group at Lenovo. “We’re excited to activate this partnership to aid data scientists in pushing forward the capabilities of AI with our premium workstations portfolio and Anaconda’s stellar open-source packages and repositories.”

Lenovo’s range of workstations is scalable, catering to various AI workflows, industries and budgets. Whether you’re working on small-scale experiments or involved in large-scale enterprise deployments, there’s a Lenovo workstation tailored to your needs. The partnership has also led to the pre-installation of Anaconda Navigator on Lenovo workstations, which provides a secure, user-friendly interface for managing packages and environments and simplifies the AI development process.

“As artificial intelligence and machine learning models grow increasingly complex, high-performance workstations are imperative to empower data scientists with advanced capabilities,” said Chandler Vaughn, Chief Product Officer at Anaconda. “Lenovo’s leadership in supplying optimized workstations, featuring robust GPUs, memory, and storage, positions them as an ideal collaborator for Anaconda and our Navigator desktop product. By jointly providing resilient hardware and trusted software tools, Lenovo and Anaconda present data scientists, AI developers, and AI engineers an unrivaled platform to freely explore emerging techniques in AI/ML. This symbiotic relationship enables organizations to push boundaries and accelerate innovations in artificial intelligence without technological constraints.”

This strategic alliance between Lenovo and Anaconda is poised to deliver the tools that data scientists require to excel in AI development and deployment. By combining Lenovo’s high-performance hardware with Anaconda’s expertise in open-source software, the partnership is expected to enhance the data science experience, driving innovation and accelerating advancements in AI.


How to accelerate ML with AI Cloud Infrastructure

The digital environment has never been as demanding for businesses as it is now. Ever-increasing competition creates a need for new solutions and tools that raise operational efficiency and maximize the output of the enterprises and companies involved.

Machine learning (ML) is one of the core features of modern business. Although it was introduced long ago, it is only now unleashing its true potential, optimizing the workflow of every company that implements it.

For all the benefits machine learning already offers, there is still plenty of room for improvement. A recent development in the digital sphere is the powerful combination of machine learning and AI cloud services. The Gcore AI Cloud Infrastructure exemplifies this trend, offering a robust platform that elevates machine-learning capabilities to new heights. What can such a combination deliver, and how do you implement it? Let’s follow the guide.


What Is Machine Learning?

Machine learning (ML) is a subcategory of artificial intelligence, which aims to imitate human behavioral and mental patterns. According to Gcore, ML algorithms learn from massive volumes of historical data using statistical models, which lets them make predictions, cluster data, generate new content and automate routine jobs, all without explicit programming.
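A minimal example of that "learn from historical data, then predict" loop is shown below, using a tiny made-up customer-churn table; in practice the history would come from a database or data warehouse in the cloud.

```python
# Minimal illustration: fit a model on labelled historical data, then predict for new cases.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "monthly_spend":   [20, 95, 40, 10, 80, 15, 60, 25],
    "support_tickets": [5,  0,  1,  7,  0,  6,  2,  4],
    "churned":         [1,  0,  0,  1,  0,  1,  0,  1],   # the outcome we want to predict
})

features = ["monthly_spend", "support_tickets"]
model = LogisticRegression().fit(history[features], history["churned"])   # the "learning" step

new_customers = pd.DataFrame({"monthly_spend": [90, 12], "support_tickets": [1, 8]})
print(model.predict(new_customers[features]))   # predicted churn labels for unseen customers
```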

What Is AI Cloud Infrastructure?

Cloud computing started a new era in the delivery of computing services. It introduced a new layer of convenience: users can access services, storage, databases, software and analytics over the Internet (the cloud) without building on-premises hardware infrastructure.

According to Google, cloud computing is typically represented in three forms: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).

Cloud computing alone is one of the cornerstones of a sustainable digital presence; the addition of AI tools has made it more valuable still.

When AI and cloud computing are merged, each amplifies the other. Cloud computing provides the resources and infrastructure to train AI models and deploy them in the cloud, while AI automates routine or complex tasks in the cloud, optimizing overall system performance.

The Benefits of AI Cloud Computing

  1. Maximized efficiency – because AI algorithms automate numerous system processes, overall efficiency improves and downtime is reduced.
  2. Improved security – AI is trained to detect data breaches and system malfunctions, helping to prevent potential threats. It can also analyze users’ behavioral patterns, spot anomalies and block potentially dangerous traffic.
  3. Predictive analytics – AI analytics provides valuable insight into user behavior, current trends and demand. Such data lets organizations and companies make informed, timely decisions about service updates and optimization.
  4. Personalization – AI algorithms can personalize the user’s journey, improving the user experience and raising customer satisfaction.
  5. Scalability – with AI, cloud systems can scale their resources and performance up or down according to the number of applications, the variability of the data, locations and so on.
  6. Cost reductions – with the help of AI analytics and its timely insights, companies can optimize the use of inventory and financial resources, preventing over- or under-stocking.


Benefits of Machine Learning in AI Cloud Infrastructure

AI cloud infrastructure enhances the capabilities of machine learning. After the algorithms are built, the models are deployed to cloud computing clusters. The main benefits are the following:

  • No need for large up-front investment: businesses can opt for on-demand pricing models and still implement machine-learning algorithms.
  • Businesses can scale their production and services with demand, growing their machine-learning capabilities. They can also experiment with a variety of algorithms without investing in hardware.
  • The AI cloud environment lets businesses access machine-learning capabilities without advanced skills in data science or artificial intelligence.
  • The AI cloud environment provides access to high-performance GPUs without additional investment in hardware.

How to speed up ML with the help of AI Cloud Infrastructure?

Choose the cloud platform

Machine-learning capabilities can only be fully unleashed on the right platform. There are numerous cloud-service providers, each offering its own services, ML features and pricing policies.

Among the most recognised platforms are Google Cloud AI Platform, Amazon SageMaker, Microsoft Azure Machine Learning, IBM Watson Studio and the AI IPU Cloud Infrastructure by Gcore.

When comparing platforms, it is important to check the key features and aspects: security, scalability, pre-built models, libraries, integration options, flexibility, customization and pricing.

Exploit GPUs and TPUs

The main benefit of cloud services is the ability to use powerful hardware to accelerate machine learning without having to build on-premises infrastructure.

GPUs (graphics processing units) and TPUs (tensor processing units) are two devices that can process large amounts of data and perform complex operations much faster than CPUs (central processing units). That speed reduces the time and cost of building algorithms and training models.
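A minimal sketch of how that acceleration is used in practice with PyTorch: the same training loop runs on a CPU or a GPU, and only the device selection changes (TPUs require the separate torch_xla package, not shown). The toy model and random data are placeholders.

```python
# Minimal training loop that uses a GPU when one is available, otherwise the CPU.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(1024, 128, device=device)   # toy input batch
y = torch.randn(1024, 1, device=device)     # toy targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"trained on {device}, final loss {loss.item():.4f}")
```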

Optimize model architecture and hyperparameters

The model architecture refers to a model’s structure and design; the hyperparameters are the settings that govern how it trains and behaves. When the two are tuned together, the model’s accuracy and efficiency both improve.

Using the right cloud service speeds up this optimization process, as the sketch below illustrates.
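A minimal hyperparameter-search sketch with scikit-learn: on a cloud cluster the same grid can be spread across many machines, whereas here it simply uses all local cores. The data set and parameter grid are illustrative assumptions.

```python
# Grid search over architecture-like and regularization hyperparameters of a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 30],     # structural choices
    "min_samples_leaf": [1, 5],      # regularization hyperparameters
}

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)   # n_jobs=-1: use all available cores
search.fit(X, y)
print(search.best_params_, search.best_score_)
```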

Introduce cloud-based model serving and monitoring

Model serving makes a trained model available for use by applications, while model monitoring keeps track of its performance.

AI cloud services speed up model deployment, support its operation and provide insight into its performance.
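A minimal serving-and-monitoring sketch using FastAPI is shown below; a trained model is exposed behind an HTTP endpoint and each request is logged. The stand-in model, feature fields and crude request counting are illustrative assumptions, not a particular cloud provider's service.

```python
# Minimal model-serving sketch with FastAPI plus basic request logging as "monitoring".
import logging

from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

logging.basicConfig(level=logging.INFO)
app = FastAPI()

# Stand-in model trained on synthetic data; a real deployment would load a saved artefact.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)
prediction_count = 0

class Features(BaseModel):
    monthly_spend: float
    support_tickets: float
    tenure_months: float

@app.post("/predict")
def predict(features: Features) -> dict:
    global prediction_count
    prediction_count += 1
    row = [[features.monthly_spend, features.support_tickets, features.tenure_months]]
    score = float(model.predict_proba(row)[0, 1])
    logging.info("prediction #%d -> %.3f", prediction_count, score)  # crude monitoring hook
    return {"positive_class_probability": score}

# Run with: uvicorn serve:app --port 8000   (if this file is saved as serve.py)
```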

Final Thoughts

On its own, machine learning is an efficient way to improve the performance of almost any business. Combined with AI cloud services and infrastructure, it becomes an essential tool for streamlining workloads and maximizing efficiency, thereby increasing ROI, profits and the overall functioning of the system.


How to Accelerate Your Business Communication With ChatGPT


In today’s dynamic and ever-changing world of business communication, ChatGPT emerges as a groundbreaking innovation, transforming the way companies interact and exchange information. This article aims to thoroughly examine the multifaceted ways in which ChatGPT can be utilized to not only enhance but also significantly speed up communication processes within a corporate setting. Our exploration will encompass a detailed look at the diverse functionalities of ChatGPT, shedding light on how it can be practically applied in various business contexts.

Additionally, we will delve into effective integration strategies that can seamlessly incorporate ChatGPT into existing communication systems. This comprehensive analysis will also include careful consideration of the potential challenges and obstacles that businesses may face while adopting this advanced technology. By offering a complete and nuanced perspective, this article serves as an invaluable guide for businesses that are eager to embrace and capitalize on the capabilities of this state-of-the-art tool, ensuring they are well-equipped to navigate the complexities of modern business communication.

Understanding ChatGPT

ChatGPT, a variant of the GPT (Generative Pre-trained Transformer) model, is designed to understand and generate human-like text. Its training involves a vast array of text data, enabling it to respond in a contextually appropriate manner. For businesses, this means a tool capable of handling various aspects of communication, from customer service to internal coordination.

1. Enhancing Customer Interactions

A. Customer Support: ChatGPT can manage routine customer queries, providing quick and accurate responses. This not only improves customer satisfaction but also frees up human agents for more complex issues.

B. Personalized Communication: By analyzing previous interactions and customer data, ChatGPT can tailor conversations, making customers feel understood and valued.
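As an illustration of the customer-support case in point A above, the sketch below answers a routine query with the OpenAI Python SDK, constrained to a company policy and with a simple escalation rule. The model name, policy text and escalation convention are assumptions, not a recommended production setup.

```python
# Hedged sketch: answer a routine customer query from a fixed policy, escalate otherwise.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a support assistant for Acme Co. Answer only from the policy below. "
    "If the answer is not covered, reply exactly: ESCALATE.\n\n"
    "Policy: Orders ship within 2 business days. Returns are accepted within 30 days."
)

def answer_customer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    reply = response.choices[0].message.content
    return "Routing you to a human agent." if reply.strip() == "ESCALATE" else reply

print(answer_customer("How long does shipping take?"))
```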

2. Streamlining Internal Communication

A. Automating Routine Tasks: ChatGPT can handle scheduling, email responses, and information retrieval, thus reducing the time spent on mundane tasks.

B. Training and Development: ChatGPT can be used for onboarding new employees, offering interactive training modules, and providing instant answers to work-related queries.

3. Business Intelligence and Analytics

ChatGPT can process and summarize large volumes of data, offering insights and reports. This can be pivotal in strategy formulation and decision-making processes.

4. Marketing and Public Relations

A. Content Creation: From drafting press releases to creating engaging social media content, ChatGPT can assist in maintaining a consistent brand voice.

B. Market Research: By analyzing social media and other online platforms, ChatGPT can provide valuable insights into market trends and consumer preferences.

Integration Strategies

1. Customization and Training: Tailoring ChatGPT to your business needs involves training it with specific data relevant to your industry and company.

2. Multichannel Integration: Integrating ChatGPT across various communication channels (email, social media, SMS) ensures a seamless customer experience.

3. User Interface Considerations: Designing an intuitive interface for ChatGPT interactions enhances user engagement, whether it’s for employees or customers.

Addressing Challenges and Ethical Considerations

1. Privacy and Security: Ensuring that ChatGPT interactions comply with data protection laws and maintaining customer trust is crucial.

2. Miscommunication Risks: Setting up protocols to handle misunderstandings or incorrect information provided by ChatGPT is important for maintaining credibility.

3. Balancing AI and Human Interaction: While ChatGPT can handle a significant portion of communication, identifying scenarios where human intervention is preferable is key to maintaining a personal touch.

Summary

ChatGPT stands as a monumental shift in the landscape of business communication, epitomizing a new era where efficiency, deep personalization, and an extensive spectrum of functionalities are at the forefront. This transformational technology, when integrated with strategic thoughtfulness and foresight, has the potential to not only revolutionize the way businesses communicate internally and externally but also to provide them with a significant competitive advantage in the rapidly evolving and highly competitive market. The process of integrating ChatGPT involves navigating through a series of potential challenges and adapting to the unique needs of each business, thereby customizing the experience and maximizing the benefits.

As we continue to observe and participate in the relentless progress of artificial intelligence, it becomes increasingly clear that tools like ChatGPT are set to play an increasingly pivotal and influential role in shaping business communication strategies. Their ability to adapt, learn, and provide tailored interactions makes them indispensable in an age where speed, accuracy, and personal touch in communication are paramount. The future trajectory of AI and its integration into business practices, particularly in communication, suggests a landscape where ChatGPT and similar technologies will not only be adjunct tools but essential components in driving business growth, innovation, and customer engagement in an ever-more connected and digital world.


How to accelerate your productivity with Google Bard


This guide is designed to show you how to accelerate your productivity with Google Bard. In today’s dynamic and demanding world, productivity has become a cornerstone of success. Whether you are a student deeply immersed in the challenging world of academics, grappling with a multitude of assignments and projects, an ambitious entrepreneur passionately pursuing your dreams and setting new benchmarks, or a corporate executive skillfully navigating the intricate and often unpredictable terrains of the business landscape, the ability to effectively optimize your time and augment your output has become more crucial than ever.

This relentless pursuit of efficiency and excellence in every endeavor necessitates the adoption of innovative strategies and tools. In this context, amidst the constantly changing technological landscape, groundbreaking tools like Google Bard have emerged as indispensable allies. These advancements in technology serve as beacons, guiding individuals and professionals in their quest to attain heightened levels of productivity and efficiency, thereby transforming the way we approach our goals and tasks in this dynamic world.

What is Google Bard?

Google Bard is a conversational AI assistant created by Google AI and powered by the company’s large language models (LLMs). LLMs are trained on massive amounts of text data, allowing them to generate human-quality text, translate languages, write different kinds of creative content, and answer your questions in an informative way. In short, Google Bard is a powerful tool that can be used for a variety of purposes, including boosting productivity.

How to Use Google Bard to Accelerate Productivity

Here are some specific ways you can use Google Bard to accelerate your productivity:

1. Brainstorming Ideas

When creativity stalls and fresh ideas elude you, Google Bard steps in as a trusted brainstorming partner. Simply pose a query seeking innovative concepts for a new project, marketing campaign, or blog post. Google Bard, armed with its vast knowledge and understanding of human language, will diligently generate a plethora of ideas, sparking your creative engine and propelling you forward.
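Bard itself is used through its chat interface rather than an API; for scripted brainstorming, the closest programmatic route is Google’s generative AI Python SDK, shown below as an assumed stand-in (it calls a Gemini model, not Bard). The model name and prompt are illustrative.

```python
# Hedged sketch: scripted brainstorming via Google's generative AI SDK (google-generativeai),
# used here as a stand-in for Bard's chat interface. Model name is an assumption.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-pro")      # assumed model name

prompt = ("Brainstorm 10 campaign ideas for launching a reusable water bottle "
          "aimed at university students. Give each idea a one-line pitch.")
response = model.generate_content(prompt)
print(response.text)
```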

2. Content Creation

Confronting the daunting task of content creation? Google Bard proves to be an invaluable companion, transforming blank pages into compelling narratives. Request assistance in crafting blog posts, articles, social media content, or even entire books. Google Bard, with its mastery of language and storytelling, will effortlessly produce high-quality content tailored to your specific needs.

3. Research and Analysis

Navigating the labyrinth of research and analysis can be a time-consuming endeavor. Google Bard, however, alleviates this burden by providing swift and comprehensive insights. Simply instruct it to gather information on a particular topic, summarize intricate research papers, or analyze data to uncover hidden patterns.

4. Task Management

Amidst the whirlwind of tasks and deadlines, Google Bard emerges as an organizational savior. Request assistance in creating detailed to-do lists, setting timely reminders, and tracking progress on ongoing projects. With Google Bard as your task management maestro, you’ll maintain control over your workload and avoid the pitfalls of procrastination.

5. Communication and Collaboration

Effective communication and collaboration are essential for success in any endeavor. Google Bard empowers you to excel in these areas by providing guidance on crafting persuasive arguments, preparing for presentations, and writing clear and concise emails. Additionally, its language translation capabilities facilitate seamless communication with colleagues or clients from diverse linguistic backgrounds.

Tips for Getting the Most Out of Google Bard

To maximize the benefits of Google Bard, consider these helpful tips:

  • Clarity is Key: Formulate your requests with precision and clarity to ensure Google Bard accurately grasps your intent.
  • Simplicity Matters: Avoid overly complex or jargon-filled language, as Google Bard may struggle with such expressions.
  • Proofreading for Perfection: While Google Bard is remarkably capable, it’s advisable to proofread your results to ensure accuracy and polish.

Conclusion

With Google Bard by your side, you are set to embark on an exhilarating adventure of elevated productivity and efficiency. This journey, enriched by the pioneering capabilities of Google Bard, unlocks unprecedented levels of efficiency and effectiveness in your daily tasks. As you delve deeper into the realm of advanced technology, you’ll find yourself achieving your goals with remarkable ease and precision. Embrace the dynamic power of artificial intelligence as it reshapes and enhances your workflow in ways you never imagined. Allow Google Bard to be not just a tool, but a trusted guide, an innovative collaborator, and a powerful catalyst that propels you towards unparalleled success. Witness firsthand the transformative impact of this cutting-edge technology and how it can revolutionize the way you work, think, and achieve.
