Categories
News

2024 Cybersecurity trends with the evolution of artificial intelligence

As we enter 2024, the cybersecurity landscape is evolving at a rapid pace. With each passing day, the sophistication of cyber threats increases, and the need for robust security measures becomes more pressing. In this ever-changing digital world, it’s imperative for individuals and organizations alike to stay informed and prepared to protect their digital assets. Here are some of the 2024 cybersecurity trends that researchers at IBM expect to dominate this year.

  • AI-based threats are anticipated to grow, with AI being used to create more convincing phishing emails.
  • A shift from traditional passwords to passkeys is expected, with the adoption of the FIDO standard, enhancing security and user convenience.
  • Deepfake technology will likely become more sophisticated and widespread, necessitating education and security measures beyond detection.
  • Generative AI may lead to ‘hallucinations’ or inaccuracies in information, which could pose security risks. Technologies like retrieval-augmented generation (RAG) may help improve accuracy.
  • AI will also play a positive role in cybersecurity, aiding in threat anticipation and case summarization, while cybersecurity will be essential to ensure AI’s trustworthiness.
  • Persistent threats include data breaches, with costs continuing to rise, and ransomware attacks becoming faster to execute.
  • Multifactor authentication is becoming more common as a security measure.
  • Internet of Things (IoT) threats have increased, with a significant rise in attacks.
  • Quantum computing remains a potential future threat to cryptography but has not yet had a significant impact.
  • The cybersecurity skills gap has shown some improvement, with a decrease in open positions, but the need for skilled professionals remains high.

One of the most significant developments in the realm of cybersecurity is the use of artificial intelligence (AI). AI is enhancing the capabilities of cyber defense systems, but it’s also being wielded by cybercriminals. They are using AI to create phishing emails that are so well-crafted they can be hard to distinguish from legitimate messages. To combat this, the adoption of AI-powered security systems is essential. These systems can identify and mitigate the threat posed by these advanced phishing attempts.

Another trend that’s gaining traction is the move towards passwordless authentication. The traditional password system is becoming obsolete, making way for more secure methods such as the FIDO standard, which relies on passkeys. These new authentication tools, which can be physical or digital, don’t require users to remember complex passwords and are designed to reduce the risk of security breaches.

The emergence of deepfake technology is another challenge on the horizon. These hyper-realistic audio and video forgeries are becoming more convincing and widespread, posing a serious threat to personal and corporate security. To defend against the malicious use of deepfakes, education and the implementation of advanced security measures are crucial.


In the fight against misinformation, generative AI plays a dual role. While it can produce content that mimics human writing, it can also be used to generate false or misleading information. Technologies like retrieval-augmented generation (RAG) are being developed to enhance the reliability of generative AI by incorporating accurate data during the content creation process, helping to curb the spread of misinformation.
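The retrieval step described above can be sketched in a few lines. This is a minimal toy illustration, not a production RAG pipeline: the keyword retriever and the `generate()` placeholder (standing in for any large language model call) are assumptions for demonstration only.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# retrieve() is a naive keyword-overlap ranker; generate() is a stub
# standing in for a real LLM API call.

def retrieve(query, documents, k=2):
    """Rank documents by keyword overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt):
    """Placeholder for an LLM call; here it just echoes its grounding."""
    return f"Answer grounded in: {prompt}"

def rag_answer(query, documents):
    context = " | ".join(retrieve(query, documents))
    # Prepending retrieved facts anchors the model's output in real data,
    # which is what helps curb hallucinated answers.
    return generate(f"Context: {context}\nQuestion: {query}")

docs = ["FIDO passkeys replace passwords.",
        "Ransomware attacks are getting faster.",
        "RAG grounds model output in retrieved documents."]
print(rag_answer("What does RAG do?", docs))
```

Real systems replace the keyword ranker with vector-embedding search, but the shape of the pipeline (retrieve, then condition generation on what was retrieved) is the same.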

Despite the potential risks, AI remains an invaluable tool in the arsenal of cyber defense. The challenge lies in ensuring that the AI systems themselves are secure and reliable. As we rely more on these systems, their integrity becomes a cornerstone of our digital security.

The issues of data breaches and ransomware are not new, but they continue to escalate in both frequency and severity. The costs associated with these incidents are soaring, highlighting the importance of robust security protocols and effective incident response strategies.

As we enhance our security measures, multifactor authentication (MFA) is becoming a standard practice. MFA adds an extra layer of protection, which is increasingly necessary in today’s digital environment. However, as the Internet of Things (IoT) expands, so does the number of attacks on these connected devices. This surge in IoT attacks calls for stronger security measures to protect against potential vulnerabilities.
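To make the MFA point concrete, here is how one of the most common second factors, a time-based one-time password (TOTP, RFC 6238), is computed. This sketch uses only the Python standard library; the shared secret below is the standard RFC test seed, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((at_time if at_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the authenticator app share the secret; both compute the
# same code for the current 30-second window, so no code ever travels
# between them ahead of time.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # base32 of the RFC 6238 test seed
print(totp(secret, at_time=59, digits=8))    # → 94287082 (RFC 6238 test vector)
```

Because the code depends on both the secret and the clock, a stolen password alone is not enough to log in, which is exactly the extra layer MFA is meant to add.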

The advent of quantum computing is another factor that could significantly impact cybersecurity. Quantum computing has the potential to break current cryptographic standards, which means there’s an urgent need to develop quantum-resistant encryption methods to safeguard our data in the future.

A persistent issue in the field of cybersecurity is the skills shortage. Although there has been progress in addressing this gap, continuous education and training are necessary. Equipping the workforce with the skills to tackle new cyber threats is a critical step in strengthening our collective cyber defenses.

As we navigate the complex and dynamic world of cybersecurity in 2024, staying vigilant and proactive is more important than ever. Cyber threats are becoming more sophisticated, and our defenses must evolve to match them. By keeping abreast of these trends and challenges, we can better prepare ourselves to defend against the myriad of threats that lurk in the digital realm.

Filed Under: Guides, Top News





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


Misconceptions about artificial intelligence (AI)

Karmel Allison discusses four misconceptions about AI

Artificial intelligence (AI) is a term that often conjures up images of futuristic robots and machines taking over the world. However, Karmel Allison, a technical advisor to Microsoft’s CTO, Kevin Scott, offers a refreshing and insightful perspective on AI that challenges many of the common misconceptions surrounding this technology. With her extensive background in bioinformatics, linguistics, and health care advocacy, Allison provides a nuanced understanding of AI and its implications for our future.

Misconceptions about AI

AI will cause job losses

One of the most pervasive fears about artificial intelligence is that it will lead to widespread job loss. Many people worry that as machines become more intelligent, they will replace human workers, leaving a trail of unemployment in their wake. However, Allison encourages us to look at AI from a different angle. Rather than seeing it as a threat to employment, she views AI as a catalyst for job transformation.

AI has the potential to take over mundane and repetitive tasks, freeing humans to focus on more creative and meaningful work. In the workplace, AI can handle routine administrative duties, allowing employees to concentrate on strategic planning and innovation. This shift can ultimately enhance job roles, making them more fulfilling and valuable.

AI is only for tech people

Another common myth is that artificial intelligence is only relevant to those who are deeply entrenched in the tech industry. Allison dispels this notion by pointing out how AI has already become a part of our daily lives, often without us even realizing it. From the search engines we use to find information online to the personalized shopping recommendations we receive, AI is everywhere.

Its influence extends to various sectors, including health care, where it is revolutionizing patient care with improved diagnostic tools and customized treatment plans. This demonstrates that AI is not just for tech experts; it has practical applications that benefit everyone, regardless of their technical background.

AI is one thing

When people think of misconceptions about artificial intelligence, they often imagine it as a monolithic, all-powerful force. However, Allison clarifies that AI is not a single entity but rather a collection of diverse tools and technologies, each designed for a specific purpose. It’s important to recognize this diversity within AI to select the right tool for the job at hand. Whether it’s automating creative processes or enhancing predictive text, the goal is to match the AI solution with the task to achieve the best results.

AI is inherently biased

One of the most pressing concerns about AI is the potential for it to reflect and amplify existing societal biases. Allison acknowledges this risk but advocates for the responsible use of AI systems. She emphasizes the importance of building AI with diverse data sets and creating fair algorithms to reduce bias. Microsoft’s commitment to responsible AI principles, which focus on fairness, reliability, and transparency, serves as an example of how artificial intelligence can be developed to support human abilities while maintaining ethical standards.

Allison invites us to reconsider our preconceived notions about AI. By understanding AI’s transformative effect on jobs, its ubiquitous presence in our lives, the wide range of its applications, and the ongoing efforts to ensure its ethical use, we can develop a more profound appreciation for AI’s ability to enhance human skills and positively impact various aspects of life. As artificial intelligence becomes more integrated into the fabric of our society, it is crucial to stay informed and engage with AI tools thoughtfully, leveraging their potential to improve both our personal and professional experiences.


The conversation around misconceptions about AI often revolves around its potential to disrupt industries and change the way we live and work. While these discussions are important, they sometimes miss the more subtle ways in which AI is already influencing our lives. Allison’s insights remind us that AI is not just about the big, transformative changes; it’s also about the small, incremental improvements that make our daily tasks easier and more enjoyable.

Moreover, the ethical considerations of artificial intelligence cannot be overstated. As we continue to develop and deploy AI systems, we must be vigilant in ensuring that they do not perpetuate harmful biases or inequalities. This requires a concerted effort from everyone involved in the creation and implementation of AI, from data scientists and engineers to policymakers and end-users.

Allison’s perspective on artificial intelligence is a call to action for all of us to engage with this technology in a thoughtful and informed manner. By doing so, we can harness the power of AI to create a future that is not only more efficient and productive but also more equitable and just. Whether we are tech enthusiasts or simply users of technology in our everyday lives, we all have a role to play in shaping the trajectory of AI and ensuring that it serves the greater good.

In the end, the true potential of artificial intelligence lies not in its ability to replace humans but in its capacity to augment our abilities and enrich our lives. As we navigate the complexities of this rapidly evolving field, we must keep an open mind and be willing to challenge our assumptions. With leaders like Karmel Allison guiding the conversation, we can look forward to a future where AI is understood, appreciated, and utilized in ways that benefit us all.

Filed Under: Technology News, Top News







What is Multimodal Artificial Intelligence (AI)?


If you have engaged with the latest ChatGPT-4 AI model or perhaps the latest Google search engine, you will already have used multimodal artificial intelligence. However, just a few years ago such easy access to multimodal AI was only a dream. This guide will explain what this new technology is and how it is truly revolutionizing our world on a daily basis.

AI technologies that specialize in one form of data analysis, such as text-based chatbots or image recognition software, use single-modality learning. But AI can now combine different forms of data, such as images, text, photographs, graphs, and reports, for a richer, more insightful analysis. These multimodal AI applications are already making their mark across many different areas of our lives.

For example, in autonomous vehicles, multimodal AI collects data from cameras, LiDAR, and radar, combining it all for better situational awareness. In healthcare, AI can combine textual medical records with imaging data for more accurate diagnoses. In conversational agents such as ChatGPT-4, multimodal AI can interpret both the text and the tone of voice to provide more nuanced responses.

Multimodal Artificial Intelligence

  • Single-Modality Learning: Handles only one type of input.
  • Multimodal Learning: Can process multiple types of inputs like text, audio, and images.

Older machine learning models were unimodal, meaning they could handle only one type of input. For instance, text-based models built on the Transformer architecture focus exclusively on textual data. Similarly, Convolutional Neural Networks (CNNs) are geared toward visual data like images.

One multimodal AI technology you can try is OpenAI’s ChatGPT, now capable of interpreting inputs from text, files, and imagery. Another is Google’s multimodal search engine. In essence, multimodal artificial intelligence (AI) systems are engineered to comprehend, interpret, and integrate multiple forms of data, be it text, images, audio, or even video. This versatile approach enhances the AI’s contextual understanding, making its outputs much more accurate.


The limitation here is evident—these models cannot naturally handle a mix of inputs, such as both audio and text. For example, you might have a conversational model that understands the text but fails to account for the tone or intonation captured in the audio, leading to misinterpretation.

In contrast, multimodal learning aims to build models that can process various types of inputs and possibly create a unified representation. This unification is beneficial because learning from one modality can enhance the model’s performance on another. Imagine a language model trained on both books and accompanying audiobooks; it might better understand the sentiment or context by aligning the text with the spoken words’ tone.

Another remarkable feature is the ability to generate common responses irrespective of the input type. In practical terms, this means the AI system could understand a query whether it’s typed in as text, spoken aloud, or even conveyed through a sequence of images. This has profound implications for accessibility, user experience, and the development of more robust systems. Let’s delve deeper into the facets of multimodal learning in machine learning models, a subfield that is garnering significant attention for its versatile applications and improved performance metrics. Key facets of multimodal AI include :

  • Data Types: Includes text, images, audio, video, and more.
  • Specialized Networks: Utilizes specialized neural networks like Convolutional Neural Networks (CNNs) for images and Recurrent Neural Networks (RNNs) or Transformers for text.
  • Data Fusion: The integration of different data types through fusion techniques like concatenation, attention mechanisms, etc.

Simply put, integrating multiple data types allows for a more nuanced interpretation of complex situations. Imagine a healthcare scenario where a textual medical report might be ambiguous. Add to this X-ray images, and the AI system can arrive at a more definitive diagnosis. So, to enhance your experience with AI applications, multimodal systems offer a holistic picture by amalgamating disparate chunks of data.

In a multimodal architecture, different modules or neural networks are generally specialized for processing specific kinds of data. For example, a Convolutional Neural Network (CNN) might be used for image processing, while a Recurrent Neural Network (RNN) or Transformer might be employed for text. These specialized networks can then be combined through various fusion techniques, like concatenation, attention mechanisms, or more complex operations, to generate a unified representation.

In case you’re curious how these systems function, they often employ a blend of specialized networks designed for each data type. For instance, a CNN processes image data to extract relevant features, while a Transformer may process text data to comprehend its semantic meaning. These isolated features are then fused to create a holistic representation that captures the essence of the multifaceted input.

Fusion Techniques:

  • Concatenation: Simply stringing together features from different modalities.
  • Attention Mechanisms: Weighing the importance of features across modalities.
  • Hybrid Architectures: More complex operations that dynamically integrate features during processing.
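The first two fusion techniques above can be sketched numerically. This is a minimal illustration under stated assumptions: the "features" are hand-made toy vectors rather than real CNN or Transformer outputs, and the attention scores are hard-coded where a real model would learn them.

```python
import numpy as np

# Toy feature vectors, as if produced by a CNN (image) and a Transformer (text).
image_features = np.array([0.9, 0.1, 0.4])
text_features  = np.array([0.2, 0.8, 0.5])

# 1. Concatenation: simply string the modality features together.
#    The fused vector keeps everything but grows in dimension.
fused_concat = np.concatenate([image_features, text_features])

# 2. Attention-style weighting: decide how much each modality should
#    contribute (a real model learns these scores), then combine.
scores = np.array([1.5, 0.5])                     # unnormalized relevance
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over modalities
fused_attn = weights[0] * image_features + weights[1] * text_features

print(fused_concat.shape)   # (6,) -- dimensions add up
print(fused_attn.shape)     # (3,) -- same size, modality-weighted
```

Concatenation preserves all information but leaves the downstream network to sort out which modality matters; attention weighting builds that judgment into the fusion step itself.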

Simplified Analogies

The Orchestra Analogy: Think of multimodal AI as an orchestra. In a traditional, single-modal AI model, it’s as if you’re listening to just one instrument—say, a violin. That’s beautiful, but limited. With a multimodal approach, it’s like having an entire orchestra—violins, flutes, drums, and so on—playing in harmony. Each instrument (or data type) brings its unique sound (or insight), and when combined, they create a richer, fuller musical experience (or analysis).

The Swiss Army Knife Analogy: A traditional, single-modal AI model is like a knife with just one tool—a blade for cutting. Multimodal AI is like a Swiss Army knife, equipped with various tools for different tasks—scissors, screwdrivers, tweezers, etc. Just as you can tackle a wider range of problems with a Swiss Army knife, multimodal AI can handle more complex queries by utilizing multiple types of data.

Real-World Applications

To give you an idea of its vast potential, let’s delve into a few applications:

  • Autonomous Vehicles: Sensor fusion leverages data from cameras, LiDAR, and radar to provide an exhaustive situational awareness.
  • Healthcare: Textual medical records can be complemented by imaging data for a more thorough diagnosis.
  • E-commerce: Recommender systems can incorporate user text reviews and product images for enhanced recommendations.

Google, with its multimodal capabilities in search algorithms, leverages both text and images to give you a more complete set of search results. Similarly, Tesla excels in implementing multimodal sensor fusion in its self-driving cars, capturing a 360-degree view of the car’s surroundings.

The importance of multimodal learning primarily lies in its ability to generate common representations across diverse inputs. For instance, in a healthcare application, a multimodal model might align a patient’s verbal description of symptoms with medical imaging data to provide a more accurate diagnosis. These aligned representations enable the model to understand the subject matter more holistically, leveraging complementary information from different modalities for a more rounded view.

Multimodal AI has immense promise but is also subject to ongoing research to solve challenges like data alignment and modality imbalance. However, with advancements in deep learning and data science, this field is poised for significant growth.

So there you have it, a sweeping yet accessible view of what multimodal AI entails. With the ability to integrate a medley of data types, this technology promises a future where AI is not just smart but also insightful and contextually aware.

Multimodal Artificial Intelligence (AI) summary:

  • Single-Modality Learning: Handles only one type of input.
  • Multimodal Learning: Can process multiple types of inputs like text, audio, and images.
  • Cross-Modality Benefits: Learning from one modality can enhance performance in another.
  • Common Responses: Capable of generating unified outputs irrespective of input type.
  • Common Representations: Central to the multimodal approach, allowing for a holistic understanding of diverse data types.

Multimodal learning offers an evolved, nuanced approach to machine learning. By fostering common representations across a spectrum of inputs, these models are pushing the boundaries of what AI can perceive, interpret, and act upon.

Filed Under: Guides, Top News







The Role of Artificial Intelligence in Antivirus Software

Artificial Intelligence

In an age where your grandma’s fridge can get hacked (no kidding!), the fight against cyber threats requires some next-level tools. Forget those boring, monotonous database updates; the real action happens when artificial intelligence steps into the arena of antivirus defense.

Just think about it. The digital universe is expanding faster than a galaxy on steroids. How’s the average Joe (or the best antivirus app) supposed to keep up with every new threat out there? Enter AI, the geeky hero we didn’t know we needed.

Understanding AI in the Context of AV

  • Definition: At its core, AI, or Artificial Intelligence, is about creating algorithms that allow computers to perform tasks that typically require human intelligence. This can be anything from understanding natural language to recognizing patterns.
  • AV’s Need for AI: Traditional antivirus software relies on signature-based detection. This means they have to know a virus to spot a virus. Not super-efficient, right? AI flips the script. Instead of waiting for a known threat, AI helps AV software predict and prevent unknown threats.
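The contrast in that second bullet can be boiled down to a few lines of code. This is a deliberately simplified sketch: the malware hashes, behavior events, and weights are all made up for illustration, and real behavior-based engines use learned models rather than a hand-written scoring table.

```python
import hashlib

# Signature-based detection: a file is flagged only if its hash is already
# in the known-malware database -- so unknown threats slip through.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"evil payload v1").hexdigest(),
}

def signature_scan(file_bytes):
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

# Behavior-based detection: score what a program *does*, so a brand-new
# sample can still be flagged. Events and weights here are illustrative.
SUSPICIOUS_WEIGHTS = {"encrypts_user_files": 0.6,
                      "disables_backups": 0.3,
                      "contacts_unknown_server": 0.2}

def behavior_scan(observed_events, threshold=0.5):
    score = sum(SUSPICIOUS_WEIGHTS.get(e, 0.0) for e in observed_events)
    return score >= threshold

new_sample = b"evil payload v2"                   # never seen before
print(signature_scan(new_sample))                 # False: no signature yet
print(behavior_scan({"encrypts_user_files", "disables_backups"}))  # True
```

The same malicious program evades the signature lookup (its hash is new) but is caught by the behavior score, which is exactly the gap AI-based detection closes.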

Imagine a bustling town square from a bygone era. Going about their business, each individual is distinct and recognizable to the town’s vigilant watchman. This watchman knows everyone by face, just as traditional antivirus software knows viruses by their signatures. One day, a stranger, unknown to the watchman, strolls into the square with nefarious intent.

The watchman is caught off guard because he doesn’t recognize this new face. This stranger represents a new, unknown threat—similar to a new malware that hasn’t been recorded in the antivirus database.

Now, envision a different scenario. This time, the town has an oracle—a wise sage who doesn’t just recognize faces but senses intent, behavior, and even the subtlest of changes in the town’s patterns. This oracle doesn’t need to have met someone before to gauge if they might pose a threat. Instead, she assesses actions, behavior, and anomalies to make predictions. This is the power of AI in antivirus software.

Just as the oracle can preemptively identify potential dangers based on behavior and patterns, AI-backed AV systems can detect and deflect threats before they become recognized entities. The difference between reactive and genuinely proactive gives us a dynamic defense in an ever-evolving digital world.

Moreover, mobile devices have not been immune to the escalating threats. Given the rapid adoption of smartphones for both personal and professional tasks, the need for robust antivirus protection has never been more critical.

AI technologies are now increasingly integrated into antivirus apps specifically designed for mobile platforms. These apps leverage AI to offer real-time protection, especially vital when using public Wi-Fi or downloading apps from third-party stores.

The Goodies AI Brings to AV Software

  1. Predictive Analysis: AI can analyze patterns and behaviors to predict future threats. It’s like having a crystal ball that alerts you to threats before they strike.
  2. Adaptive Learning: Remember those high school days when learning was a drag? AI doesn’t. Machine learning, a subset of AI, allows antivirus software to continually learn from new data, getting smarter over time.
  3. Real-time Threat Detection: AI can analyze vast amounts of data in milliseconds. This means detecting and blocking threats in real-time, ensuring your data’s safety is no joke.
  4. Mobile Optimization: The beauty of AI lies in its scalability. AI-backed antivirus software isn’t just for your desktop; it’s also fine-tuned for mobile applications. These apps are light on system resources but heavy on protection, providing a shield against threats without draining your battery life.
  5. Reduced False Positives: Ain’t nobody got time for those annoying false alerts. With AI’s precision, the rate of false positives takes a nosedive.
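Points 1 to 3 above rest on the same statistical idea: learn what "normal" looks like and flag deviations instantly. Here is a minimal sketch using a running z-score; the traffic numbers are invented for illustration, and real products use far richer models than a single statistic.

```python
# Minimal sketch of real-time anomaly detection with a running z-score.
# Welford's online algorithm keeps the mean/variance updated per event,
# so each observation is scored in constant time -- i.e. "real time".

class AnomalyDetector:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                  # running sum of squared deviations
        self.threshold = threshold

    def observe(self, x):
        """Flag x if it lies more than `threshold` std-devs from the baseline,
        then fold it into the running statistics (adaptive learning)."""
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            is_anomaly = bool(std > 0 and abs(x - self.mean) / std > self.threshold)
        else:
            is_anomaly = False         # not enough history yet
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_anomaly

det = AnomalyDetector()
normal_traffic = [100, 102, 98, 101, 99, 103, 97, 100]  # e.g. requests/sec
flags = [det.observe(x) for x in normal_traffic]
print(any(flags))          # False: baseline traffic looks normal
print(det.observe(500))    # True: sudden spike flagged immediately
```

Because the baseline keeps updating, the detector adapts as "normal" drifts over time, which is also why false positives drop compared with a fixed rule.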

Use Cases for AI-Powered Antivirus Software:

  • Personal: Have you ever clicked on a sketchy link by accident? AI can quickly analyze the link’s behavior, blocking malicious activities before they wreak havoc on your files.
  • Business: Imagine a financial firm with tons of sensitive data bombarded by sophisticated phishing attacks. AI-backed AV software can detect anomalies and thwart these advanced threats, safeguarding the company’s reputation and assets.
  • Mobile: Let’s say you’re an avid traveler who relies on public Wi-Fi or a parent giving your child their first smartphone. AI-backed antivirus mobile apps offer peace of mind by actively monitoring for suspicious behavior, whether you’re surfing the web or installing a new app.
  • Institutional: Think of universities with research data. AI-powered AV software can ensure intellectual properties and students’ personal data remain untouched by cyber goons.

Comparative Table: Traditional AV vs. AI-Powered AV

Feature              | Traditional AV   | AI-Powered AV  | AI-Powered Mobile AV
Detection Method     | Signature-Based  | Behavior-Based | Behavior-Based
Learning             | Static           | Adaptive       | Adaptive
Threat Response Time | Minutes to Hours | Milliseconds   | Milliseconds
False Positives      | Common           | Rare           | Even Rarer
Platform             | Desktop          | Desktop        | Mobile

To wrap things up with some techy swagger: While traditional AV is like the old guard, AI-powered AV is like the savvy, next-gen superhero. It’s not about ditching the old but leveling up with the times. The smarter choice thinks, learns, and adapts — and it does so across all your devices. Because in this digital cosmos, only the smartest survive, be it on your desktop or in your pocket!

Filed Under: Guides, Technology News







According to research, an artificial intelligence model built to create a “sense of urgency” helps clinicians predict their patients’ risk of dying.

Researchers at OSF HealthCare want to make sure that patients have “important conversations” about their plans for the end of their lives.
Only 22% of Americans write down their end-of-life plans, according to one study. A team at OSF HealthCare in Illinois is using artificial intelligence to help doctors figure out which patients are more likely to die during their hospital stay.

A news statement from OSF says that the team made an AI model that can predict a patient’s risk of dying between five and ninety days after being admitted to the hospital.

The goal is for the doctors to be able to talk to these people about important end-of-life issues.

In an interview with Fox News Digital, lead study author Dr. Jonathan Handler, an OSF HealthCare senior fellow of innovation, said, “It’s a goal of our organization that every single patient we serve would have their discussions about advanced care planning written down so that we could give them the care they want, especially at a sensitive time like the end of their life when they may not be able to talk to us because of their medical condition.”

If a patient is asleep or on a respirator, for example, it may be too late for them to tell their doctors what they want.
Handler said that in an ideal world, the mortality prediction would keep patients from dying before they got the full benefits of the hospice care they could have gotten if their goals had been written down sooner.

Since the average length of a hospital stay is four days, the researchers decided to start the model at five days and end it at 90 days to give a “sense of urgency,” as one researcher put it.

The AI model was tried on a set of data from more than 75,000 people of different races, cultures, genders, and social backgrounds.

The study, which was just released in the Journal of Medical Systems, showed that the death rate for all patients was 1 in 12.

But for people who the AI model said were more likely to die while they were in the hospital, the death rate went up to one in four, which is three times higher than the average.
The model was tried before and during the COVID-19 pandemic, and the results were almost the same, according to the study team.

Handler said that 13 different kinds of patient information were used to teach the patient death estimator how to work.

“That included clinical trends, like how well a patient’s organs are working, as well as how often and how intensely they’ve had to go to the health care system and other information, like their age,” he said.
Handler said that the model gives a doctor a chance, or “confidence level,” as well as an account of why the patient has a higher-than-normal chance of dying.

“At the end of the day, the AI takes a lot of information that would take a clinician a long time to gather, analyze, and summarize on their own, and then presents that information along with the prediction to allow the clinician to make a decision,” he said.
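The article does not describe the OSF model’s internals, but the pattern Handler describes, a probability plus an explanation of which inputs drove it, can be sketched with a simple logistic risk score. Everything below is hypothetical: the feature names, weights, and bias are invented for illustration and are not the model’s actual 13 inputs.

```python
import math

# Illustrative feature weights -- NOT the OSF model's real inputs.
WEIGHTS = {"age_over_80": 1.2,
           "organ_dysfunction_score": 0.9,
           "admissions_last_year": 0.4}
BIAS = -3.0

def mortality_risk(patient):
    """Return (probability, per-feature contributions).

    The contributions dict is the "why" shown alongside the prediction,
    mirroring the confidence-plus-explanation output Handler describes.
    """
    contributions = {name: w * patient.get(name, 0.0)
                     for name, w in WEIGHTS.items()}
    logit = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))      # logistic function
    return prob, contributions

prob, why = mortality_risk({"age_over_80": 1,
                            "organ_dysfunction_score": 2.0,
                            "admissions_last_year": 3})
print(f"risk={prob:.2f}")                      # risk=0.77
for name, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{c:.2f}")
```

Surfacing per-feature contributions, rather than a bare probability, is what lets the clinician weigh the prediction and decide whether to start the conversation.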
Handler said that a similar AI model made at NYU Langone gave the OSF researchers an idea of what they could do.

“They had made a death predictor for the first 60 days, which we tried to copy,” he said.

“We think our population is very different from theirs, so we used a different kind of predictor to get the results we wanted, and we were successful.”

“Then, the AI uses this information to figure out how likely it is that the patient will die in the next five to ninety days.”

The forecast “isn’t perfect,” Handler said. Just because it shows a higher risk of death doesn’t mean it will happen.

“But at the end of the day, the goal is to get the clinician to talk, even if the predictor is wrong,” he said.
“In the end, we want to do what the patient wants and give them the care they need at the end of life,” Handler said.
OSF is already using the AI tool because, as Handler said, the health care system “tried to integrate it as smoothly as possible into the clinicians’ workflow in a way that helps them.”

Handler said, “We are now in the process of optimizing the tool to make sure it has the most impact and helps patients and clinicians have a deep, meaningful, and thoughtful conversation.”

Expert on AI points out possible limits

Dr. Harvey Castro, a board-certified emergency medicine doctor in Dallas, Texas, and a national speaker on AI in health care, said that OSF’s model may have some benefits, but it may also have some risks and limits.

Possible false results are one of them. “If the AI model wrongly predicts that a patient is at a high risk of dying when they are not, it could cause the patient and their family needless stress,” Castro said.
Castro also brought up the risk of false negatives.

“If the AI model doesn’t find a patient who is at high risk of dying, important conversations about end-of-life care might be put off or never happen,” he said. “If this happens, the patient might not get the care they would have wanted in their last days.”
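Castro's two concerns map directly onto standard classifier metrics: precision (how many flagged patients were truly high-risk) and sensitivity (how many truly high-risk patients were caught). A short sketch with illustrative, made-up counts (not OSF data):

```python
# Illustrative counts for a hypothetical evaluation cohort.
true_positives = 40   # high-risk patients the model correctly flagged
false_positives = 15  # patients flagged high-risk who were not (needless stress)
false_negatives = 10  # high-risk patients the model missed (conversations delayed)

# Sensitivity: share of truly high-risk patients the model caught.
sensitivity = true_positives / (true_positives + false_negatives)

# Precision: share of flagged patients who were truly high-risk.
precision = true_positives / (true_positives + false_positives)

print(f"sensitivity: {sensitivity:.2f}")  # sensitivity: 0.80
print(f"precision: {precision:.2f}")      # precision: 0.73
```

Low precision drives the false-alarm harm Castro describes first; low sensitivity drives the missed end-of-life conversations he describes second.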

Castro said that other possible risks include relying too much on AI, worrying about data privacy, and the possibility of bias if the model is built on a small set of data. This could lead to different care advice for different patient groups.

The expert said that these kinds of models should be used with human contact.

“End-of-life conversations are difficult and can have a profound psychological impact on a patient,” he said. “People who work in health care should pair AI predictions with a human touch.”

The expert said that these models need to be constantly checked and given feedback to make sure they are still accurate and useful in the real world.
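One simple form the ongoing checking Castro describes could take is a periodic calibration check: comparing the model's average predicted risk against the death rate actually observed in the same cohort. This is a hypothetical monitoring sketch, not anything the article attributes to OSF.

```python
# Hypothetical monitoring check: does the average predicted risk match
# the outcome rate observed in the same group of patients?
def calibration_gap(predicted_risks, observed_outcomes):
    """Return the absolute gap between mean predicted risk and observed rate."""
    mean_predicted = sum(predicted_risks) / len(predicted_risks)
    observed_rate = sum(observed_outcomes) / len(observed_outcomes)
    return abs(mean_predicted - observed_rate)

preds = [0.1, 0.3, 0.6, 0.2]
outcomes = [0, 0, 1, 0]  # 1 = patient died within the prediction window
gap = calibration_gap(preds, outcomes)
print(f"calibration gap: {gap:.2f}")  # calibration gap: 0.05
```

A growing gap over successive cohorts would be a signal that the model has drifted and needs retraining, which is the kind of feedback loop the expert calls for.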

“It is very important to study AI’s role in health care from an ethical point of view, especially when making predictions about life and death.”


German military invests millions in an artificial intelligence “environment” for weapons testing that could fundamentally change warfare.

The GhostPlay platform uses “third-wave” AI systems that make decisions that seem “human-like.”
Germany has put a lot of money into an artificial intelligence (AI) virtual training area that some people call a military “metaverse.” Officials say this will help them figure out how to fight in the future.

GhostPlay project head Gary Schaal, a professor at Helmut Schmidt University in Hamburg, said in a news statement, “We compete with the big ones in the industry. Our unique selling point is that we can move quickly and show results quickly.”

To create the virtual battlefield GhostPlay, developer 21strategies brought together a group of defense experts and start-ups, giving developers a risk-free environment in which to test different weapons and systems.
Defense News reported that the German Defense Ministry funded the project as part of a 500 million euro ($540 million) COVID-19 stimulus package meant to help the country’s high-tech defense sector get back on its feet.
On the GhostPlay website, the tool is called a “simulation environment with AI-based decision-making at machine speed.”

“Complex military battle scenarios can be simulated to find new, better ways to act,” the company wrote. “As a result, flexibility and superiority can be achieved on the strategic, tactical, and operational levels.”

The creators said that the models can generate “unpredictable” scenarios that make military testing and planning more detailed and thorough.
One of the things that makes this program stand out is that it uses “third-wave” algorithms, which, according to 21strategies CEO Yvonne Hofstetter, make the virtual units make more “human-like” decisions.

She said that second-wave algorithms merely refine or speed up the decision-making process, while third-wave algorithms can generate new scenarios and devise new courses of action.

Hofstetter says the platform also tries to recreate environments “down to the last leaf,” combining satellite imagery with local data on everything from buildings to vegetation.
“There is enough information… it’s kind of scary, really,” Hofstetter said.

One of the platform’s most notable recent applications has been exploring how to improve swarm tactics, especially for loitering munitions. The Office of Army Development has worked with the tool because it can produce detailed simulations of the environments where such weapons would be used.

Hensoldt, a multinational company that helps fund the GhostPlay platform, said in a press release, “To best enable highly complex defense systems, we need to master artificial intelligence in its entirety. To do this, we develop a lot of AI skills in-house and add to them in a very targeted way.”