Google is reducing the search ranking of deepfake porn

Google is tackling the problem of deepfake porn and is now downranking synthetic, AI-generated pornography in its search rankings.

A Google spokesperson confirmed to Bloomberg that the company is working to reduce this type of content in its search engine’s rankings, “continuing to reduce the appearance of nonconsensual synthetic pornography in Search and developing more safeguards as this space evolves.”

Traffic from Google to sites hosting nonconsensual, AI-generated pornography declined last month: for example, two of the most prominent deepfake sites saw drops of 21 percent and 25 percent in US desktop search traffic over ten days in May, compared with the average of the previous six months.

However, even with Google’s plan to demote these search results, the sites can still be found.

Google says the company is “actively developing new Search protections to help people affected by this content, in line with our existing policies.” Mashable has reached out to Google for comment.

Earlier this month, Google updated its policies to prevent advertisers from promoting websites that facilitate the creation of deepfake pornography. The update to its long-standing ban on sexually explicit advertising takes effect on May 30 and targets “sites or apps that claim to create deepfake pornography, instructions on how to create deepfake pornography, or services that endorse or compare deepfake pornography.” Advertisers who violate this policy face suspension and will not be able to advertise on Google again.

Google also offers a request form for removing nonconsensual deepfake pornography from search results.

Last year, Bloomberg found that Google was one of the biggest drivers of traffic to sites promoting synthetic, AI-generated pornography. Searches for celebrities or content creators alongside the word “deepfake” often surface sites that specialize in that content, sending them millions of visits. A recent Wired report also revealed that thousands of women have complained to Google about these kinds of sites. In January of this year, NBC reported that nonconsensual celebrity deepfake porn appeared at the top of Google’s and Microsoft’s Bing search results.

The scale and prevalence of deepfakes has been called a crisis, especially for women and people of marginalized genders. This year alone, deepfakes of famous figures such as Taylor Swift and Jenna Ortega have sparked debate about the issue and the role that big tech companies play. Other major platforms, including Meta’s apps and X (formerly Twitter), have been accused of complicity in allowing nonconsensual pornography to spread.

If you have been sexually assaulted, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access 24-7 help online by visiting online.rainn.org.



Source Article Link


How to fine tune AI models to reduce hallucinations

Artificial intelligence (AI) is transforming the way we interact with technology, but it’s not without its quirks. One such quirk is the phenomenon of AI hallucinations, where AI systems, particularly large language models like GPT-3 or BERT, sometimes generate responses that are incorrect or nonsensical. For those who rely on AI, it’s important to understand these issues to ensure the content produced by AI remains accurate and trustworthy. However, hallucinations can be reduced by a number of techniques when fine-tuning AI models.

AI hallucinations can occur for various reasons. Sometimes, they’re the result of adversarial attacks, where the AI is fed misleading data on purpose. More often, they happen by accident when the AI is trained on huge datasets that include errors or biases. The way these language models are built can also contribute to the problem.

To improve the reliability of AI outputs, there are several strategies you can use. One method is temperature prompting, which controls the AI’s creativity. Setting a lower temperature makes the AI’s responses more predictable and fact-based, while a higher temperature encourages creativity, which might not always be accurate.

Fine tuning AI models to reduce hallucinations

Imagine a world where your digital assistant not only understands you but also anticipates your needs with uncanny accuracy. This is the promise of advanced artificial intelligence (AI), but sometimes, this technology can lead to unexpected results. AI systems, especially sophisticated language models like GPT-3 or BERT, can sometimes produce what are known as “AI hallucinations.”

These are responses that may be incorrect, misleading, or just plain nonsensical. For users of AI technology, it’s crucial to recognize and address these hallucinations to maintain the accuracy and trustworthiness of AI-generated content. IBM provides more information on what you can consider when fine tuning AI models to reduce hallucinations.

Here are some other articles you may find of interest on the subject of fine-tuning artificial intelligence models and large language models (LLMs):

The reasons behind AI hallucinations are varied. They can be caused by adversarial attacks, where harmful data is intentionally fed into the model to confuse it. More commonly, they occur unintentionally due to the training on large, unlabeled datasets that may contain errors and biases. The architecture of these language models, which are built as encoder-decoder models, also has inherent limitations that can lead to hallucinations.

To mitigate these issues, there are several techniques that can be applied to fine-tune your AI model, thus enhancing the reliability of its output. One such technique is temperature prompting, which involves setting a “temperature” parameter to manage the AI’s level of creativity. A lower temperature results in more predictable, factual responses, while a higher temperature encourages creativity at the expense of accuracy.
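
To make this concrete, here is a minimal sketch of temperature prompting using the OpenAI Python client; the model name and question are placeholders, and any chat API that exposes a temperature parameter works the same way. For factual or technical queries, keeping the temperature low is usually the safer default.

```python
# Minimal sketch of temperature prompting with an OpenAI-style chat API.
# The model name is a placeholder; adjust it to whatever model you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "In what year was the Hubble Space Telescope launched?"

# Low temperature: more deterministic, fact-oriented answers.
factual = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
    temperature=0.1,
)

# High temperature: more varied, creative answers (and a higher risk of drift).
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
    temperature=1.2,
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```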

Another strategy is role assignment, where the AI is instructed to adopt a specific persona, such as a technical expert, to shape its responses to be more precise and technically sound. Providing the AI with detailed, accurate data and clear rules and examples, a method known as data specificity, improves its performance on tasks that demand precision, like scientific computations or coding.

Content grounding is another approach that anchors the AI’s responses in domain-specific information. Techniques like Retrieval Augmented Generation (RAG) help the AI pull data from a database to inform its responses, enhancing relevance and accuracy. Lastly, giving explicit instructions to the AI, outlining clear dos and don’ts, can prevent it from venturing into areas where it may offer unreliable information.
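
Below is a minimal sketch of the content-grounding idea. It swaps a real vector database for a toy keyword-overlap retriever, and the model name is a placeholder, but the core pattern of retrieving a trusted snippet and explicitly instructing the model to answer only from it is the same as in a full RAG pipeline:

```python
# Minimal content-grounding (RAG-style) sketch: retrieve the most relevant
# snippet from a small, trusted knowledge base and include it in the prompt,
# so the model answers from supplied facts rather than from memory alone.
from openai import OpenAI

KNOWLEDGE_BASE = [
    "The Hubble Space Telescope was launched in April 1990 aboard Space Shuttle Discovery.",
    "The James Webb Space Telescope launched on 25 December 2021 on an Ariane 5 rocket.",
]

def retrieve(question: str) -> str:
    """Toy retriever: pick the snippet sharing the most words with the question.
    A real system would use vector embeddings and a similarity index instead."""
    q_words = set(question.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda doc: len(q_words & set(doc.lower().split())))

client = OpenAI()
question = "When did the Hubble Space Telescope launch?"
context = retrieve(question)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. If the context is "
                    "insufficient, say you don't know."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
    temperature=0.1,
)
print(response.choices[0].message.content)
```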

Fine tuning AI models

Another tactic is role assignment, where you tell the AI to act like a certain type of expert. This can help make its responses more accurate and technically correct. You can also give the AI more detailed data and clearer instructions, which helps it perform better on tasks that require precision, like math problems or programming.

Content grounding is another useful approach. It involves tying the AI’s responses to specific information from a certain field. For example, using techniques like Retrieval Augmented Generation (RAG) allows the AI to use data from a database to make its responses more relevant and correct.

Reducing hallucinations in AI models, particularly in large language models (LLMs) like GPT (Generative Pre-trained Transformer), is crucial for enhancing their reliability and trustworthiness. In an AI context, hallucinations refer to instances where the model generates false or misleading information. This guide outlines strategies and considerations for fine-tuning AI models to minimize these occurrences, focusing on both technical and ethical dimensions.

1. Understanding Hallucinations

Before attempting to mitigate hallucinations, it’s essential to understand their nature. Hallucinations can arise due to various factors, including but not limited to:

  • Data Quality: Models trained on noisy, biased, or incorrect data may replicate these inaccuracies.
  • Model Complexity: Highly complex models might overfit or generate outputs based on spurious correlations.
  • Inadequate Context: LLMs might generate inappropriate responses if they misunderstand the context or lack sufficient information.

2. Data Curation and Enhancement

Improving the quality of the training data is the first step in reducing hallucinations.

  • Data Cleaning: Remove or correct inaccurate, biased, or misleading content in the training dataset (a minimal sketch follows this list).
  • Diverse Sources: Incorporate data from a wide range of sources to cover various perspectives and reduce bias.
  • Relevance: Ensure the data is relevant to the model’s intended applications, emphasizing accuracy and reliability.
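
As a rough illustration of the data-cleaning step, the sketch below drops empty records, exact duplicates, and entries matching a small blocklist; real pipelines add near-duplicate detection, bias audits, and human review. The blocklist and example records are placeholders.

```python
# Minimal data-cleaning sketch: drop empty records, exact duplicates,
# and entries matching a simple blocklist before training or fine-tuning.
# The blocklist and records are illustrative placeholders.
BLOCKLIST = {"lorem ipsum", "click here to subscribe"}

def clean(records: list[str]) -> list[str]:
    seen: set[str] = set()
    cleaned: list[str] = []
    for text in records:
        normalized = " ".join(text.split()).strip()
        if not normalized:                       # empty or whitespace-only
            continue
        if normalized.lower() in seen:           # exact duplicate
            continue
        if any(bad in normalized.lower() for bad in BLOCKLIST):  # boilerplate/noise
            continue
        seen.add(normalized.lower())
        cleaned.append(normalized)
    return cleaned

raw = ["The capital of France is Paris.",
       "The capital of France is Paris.",
       "   ",
       "Click here to subscribe for more!"]
print(clean(raw))  # -> ['The capital of France is Paris.']
```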

3. Model Architecture and Training Adjustments

Adjusting the model’s architecture and training process can also help minimize hallucinations.

  • Regularization Techniques: Apply techniques like dropout or weight decay to prevent overfitting to the training data (see the sketch after this list).
  • Adversarial Training: Incorporate adversarial examples during training to improve the model’s robustness against misleading inputs.
  • Dynamic Benchmarking: Regularly test the model against a benchmark dataset specifically designed to detect hallucinations.
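
The sketch below expresses the regularization bullet in PyTorch terms: dropout between layers and weight decay in the optimizer. The layer sizes and hyperparameter values are arbitrary placeholders, not recommendations.

```python
# Minimal regularization sketch in PyTorch: dropout between layers and
# weight decay (L2 penalty) in the optimizer help discourage overfitting.
# Layer sizes and hyperparameters are arbitrary placeholders.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Dropout(p=0.1),      # randomly zeroes activations during training
    nn.Linear(256, 2),
)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    weight_decay=0.01,      # penalizes large weights
)

# One illustrative training step on random data.
x, y = torch.randn(8, 768), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```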

4. Fine-tuning with High-Quality Data

Fine-tuning the pre-trained model on a curated dataset relevant to the specific application can significantly reduce hallucinations.

  • Domain-Specific Data: Use high-quality, expert-verified datasets to fine-tune the model for specialized tasks (a sketch follows this list).
  • Continual Learning: Continuously update the model with new data to adapt to evolving information and contexts.
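
Here is a hedged sketch of fine-tuning a small causal language model on a curated dataset with the Hugging Face Transformers Trainer. The base model, example texts, and hyperparameters are placeholders chosen only to keep the example short.

```python
# Hedged sketch of fine-tuning a small causal LM on a curated, domain-specific
# dataset with Hugging Face Transformers. The model name, output directory, and
# hyperparameters are placeholders, not recommendations.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; substitute the model you are fine-tuning
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny stand-in for an expert-verified, domain-specific corpus.
curated_texts = [
    "Q: What is the boiling point of water at sea level? A: 100 degrees Celsius.",
    "Q: How many bones are in the adult human body? A: 206.",
]
dataset = Dataset.from_dict({"text": curated_texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```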

5. Prompt Engineering and Instruction Tuning

The way inputs (prompts) are structured can influence the model’s output significantly.

  • Precise Prompts: Design prompts to clearly specify the type of information required, reducing ambiguity.
  • Instruction Tuning: Fine-tune models using datasets of prompts and desired outputs to teach the model how to respond to instructions more accurately (example records follow this list).
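
The sketch below shows what instruction-tuning records often look like in practice: precisely worded instructions paired with the desired responses, including examples that teach the model to admit uncertainty. The field names and JSON Lines format are illustrative, not a fixed standard.

```python
# Illustrative instruction-tuning records: each pair teaches the model how to
# follow a precisely worded instruction. Field names vary between projects.
import json

instruction_pairs = [
    {
        "instruction": "List the planets of the Solar System in order from the Sun. "
                       "Return only a comma-separated list, no commentary.",
        "response": "Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune",
    },
    {
        "instruction": "If you do not know the answer, reply exactly: 'I don't know.' "
                       "Question: What was the population of Atlantis in 1900?",
        "response": "I don't know.",
    },
]

# Write the pairs out as JSON Lines, a common format for instruction-tuning sets.
with open("instruction_data.jsonl", "w", encoding="utf-8") as f:
    for pair in instruction_pairs:
        f.write(json.dumps(pair) + "\n")
```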

6. Post-Processing and Validation

Implementing post-processing checks can catch and correct hallucinations before the output is presented to the user.

  • Fact-Checking: Use automated tools to verify factual claims in the model’s output against trusted sources.
  • Output Filtering: Apply filters to detect and mitigate potentially harmful or nonsensical content (a toy sketch of both checks follows this list).
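
A toy sketch of both checks is shown below: claims in the model’s output are compared against a small table of trusted facts, and unusable outputs are filtered before they reach the user. A production pipeline would call a dedicated fact-checking service or moderation model instead.

```python
# Toy post-processing sketch: flag outputs that contradict a small table of
# trusted facts, and filter outputs that are empty or clearly unusable.
# A real pipeline would call a fact-checking service or a moderation model.
TRUSTED_FACTS = {
    "boiling point of water at sea level": "100 degrees celsius",
}

def fact_check(output: str) -> list[str]:
    """Return warnings for claims that disagree with the trusted facts."""
    warnings = []
    for topic, expected in TRUSTED_FACTS.items():
        if topic in output.lower() and expected not in output.lower():
            warnings.append(f"Possible hallucination about: {topic}")
    return warnings

def output_filter(output: str) -> bool:
    """Reject empty or suspiciously short outputs before showing them to users."""
    return len(output.strip()) >= 20

candidate = "The boiling point of water at sea level is 90 degrees Celsius."
if output_filter(candidate):
    for warning in fact_check(candidate):
        print(warning)  # -> Possible hallucination about: boiling point of water...
```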

7. Ethical Considerations and Transparency

  • Disclosure: Clearly communicate the model’s limitations and the potential for inaccuracies to users.
  • Ethical Guidelines: Develop and follow ethical guidelines for the use and deployment of AI models, considering their impact on individuals and society.

8. User Feedback Loop

Incorporate user feedback mechanisms to identify and correct hallucinations, improving the model iteratively.

  • Feedback Collection: Allow users to report inaccuracies or hallucinations in the model’s output (a minimal sketch follows this list).
  • Continuous Improvement: Use feedback to refine the data, model, and post-processing methods continuously.
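
A minimal sketch of the feedback loop is shown below: user reports are appended to a log that can later be reviewed and folded back into data cleaning and fine-tuning. The file name and record fields are illustrative placeholders.

```python
# Minimal feedback-loop sketch: store user reports of bad outputs so they can
# be reviewed and folded back into data cleaning and fine-tuning later.
# The file name and record fields are illustrative placeholders.
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "hallucination_reports.jsonl"

def report_hallucination(prompt: str, output: str, note: str) -> None:
    """Append one user report to a JSON Lines log for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "note": note,
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

report_hallucination(
    prompt="Who wrote 'Pride and Prejudice'?",
    output="Charlotte Brontë wrote 'Pride and Prejudice' in 1847.",
    note="Wrong author and year; it was Jane Austen, 1813.",
)
```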

Implementing these fine-tuning strategies not only improves the user experience but also helps combat the spread of misinformation, mitigates legal risks, and builds trust in generative AI models. As AI technology progresses, ensuring the integrity of these systems is crucial for their successful adoption in our increasingly digital world.

Filed Under: Guides, Top News







Reduce stress in your life with the S-NANO V2 EDC fidget slider

Reduce stress in your life with the S-NANO V2

Ant Design, based in New York, has once again returned to Kickstarter to launch its next-generation stress-busting fidget slider, the S-NANO V2. Perfect as an everyday carry (EDC) stress reducer, the fidget slider offers a unique, magnetically responsive and satisfying movement that allows you to relax and enjoy peace of mind with every touch, say its creators.

The S-NANO V2 EDC fidget slider is a carefully crafted stress-relief tool that cleverly combines various technologies to provide a calming, rhythmic distraction. This small, pocket-sized device is designed to help you cope with the constant influx of digital information that characterizes modern life.

Reduce stress

It uses magnetic technology to create a smooth sliding motion and unique sounds, providing a tactile experience that engages your senses and helps reduce stress. Early bird packages are now available for the pioneering project from roughly $89 or £73 (depending on current exchange rates).

The S-NANO V2 has been significantly upgraded from its previous version, with a fixed top cover, improved grip for better handling, and a smaller size for easier portability. The redesigned magnets and interchangeable panels allow for customization, ensuring the device can be tailored to your personal needs and preferences. Additionally, the device is designed for durability and easy cleaning, making it a practical choice for everyday use.

A notable feature of the S-NANO V2 is its compatibility with a tritium tube. This optional feature can emit a soft glow, adding another sensory element to the experience of using the device. The tritium tube is a self-powered light source that doesn’t require electricity or batteries, enhancing the device’s convenience and dependability.

EDC fidget slider

The S-NANO V2 is available in two material options: Titanium (Grade 5) and Zirconium. These materials were chosen for their strength and aesthetic appeal. The use of these materials showcases the advanced material technology employed in the design and construction of the S-NANO V2.

Assuming that the S-NANO V2 funding campaign successfully raises its required pledge goal and fulfillment progresses smoothly, worldwide shipping is expected to take place sometime around March 2024. To learn more about the S-NANO V2 EDC fidget slider project and how it can reduce stress by giving your hands something to focus on, browse the promotional video below.

Other articles you may find of interest on the subject of fidget spinners, EDC and stress relievers :

S-NANO V2

The S-NANO V2 EDC fidget slider is a highly advanced stress-relief tool that combines digital, magnetic, haptic, and material technologies to provide a calming, rhythmic distraction. Its unique design features, including magnetic levitation, interchangeable magnets and panels, and compatibility with a tritium tube, set it apart from other stress-relief devices on the market.

With its easy maintenance, compact size for portability, and availability in two attractive materials, the S-NANO V2 is a practical and stylish solution for those seeking a tactile way to reduce stress in the digital age. Its thoughtful design and advanced features make it more than just a tool – it’s a companion for navigating the challenges of modern life.

For a complete list of all available pledges, stretch goals, extra media and product specifications for the EDC fidget slider, jump over to the official S-NANO V2 crowdfunding campaign page by navigating to the link below.

Source : Kickstarter

Disclaimer: Participating in Kickstarter campaigns involves inherent risks. While many projects successfully meet their goals, others may fail to deliver due to numerous challenges. Always conduct thorough research and exercise caution when pledging your hard-earned money.

Filed Under: Gadgets News, Top News







How to reduce your Raspberry Pi 5 power consumption by 140x

How to reduce your Raspberry Pi 5 power consumption

The Raspberry Pi 5, a mini PC that is now available for purchase, has been praised for its enhanced capabilities and increased power compared to the previous Raspberry Pi 4. However, this extra processing power comes with higher power consumption, especially in Power Off mode. This is not unique to the Raspberry Pi 5; it is also a characteristic of its predecessor, the Raspberry Pi 4. Both models, by default, leave the System on a Chip (SoC) powered up in the shutdown state, leading to a power draw of 1.2-1.6W even when nothing is plugged in other than the power supply.

The default settings of the Raspberry Pi 5 are a significant contributor to this idle power draw. Some Hardware Attached on Top (HAT) boards experience issues if the 3.3V power rail is switched off while the 5V rail is still active, so the Raspberry Pi 5 ships with POWER_OFF_ON_HALT=0, causing it to keep consuming power after shutdown.

Reduce Raspberry Pi 5 power consumption

Fortunately, a solution has been developed to mitigate this issue. Jeff Geerling, a well-known figure in the Raspberry Pi community, has developed a method to reduce the Raspberry Pi’s power consumption by up to 140 times while in Power Off mode. This solution involves editing the Electrically Erasable Programmable Read-Only Memory (EEPROM) configuration and adjusting the following settings: BOOT_UART=1, WAKE_ON_GPIO=0, POWER_OFF_ON_HALT=1.

The process of editing the EEPROM configuration is straightforward. After saving the configuration and rebooting, the power consumption should significantly decrease from 1-2W to 0.01W or less when shut down. This is a remarkable reduction, making the Raspberry Pi 5 much more energy-efficient in Power Off mode.
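
As a rough sketch, assuming the standard rpi-eeprom-config tool that ships with Raspberry Pi OS (Jeff Geerling’s write-up has the authoritative steps), the edit looks like this:

```
# Open the bootloader configuration in an editor (standard rpi-eeprom-config tool):
sudo rpi-eeprom-config --edit

# In the editor that opens, set the following values, then save, exit, and reboot:
BOOT_UART=1
WAKE_ON_GPIO=0
POWER_OFF_ON_HALT=1
```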

Importantly, the functionality of the Raspberry Pi 5 is not compromised by implementing this solution. The Raspberry Pi 5 can still boot with the POWER_OFF_ON_HALT setting, the power button still operates as expected, and the red LED remains illuminated when the device is shut down. Moreover, the Real-Time Clock (RTC) continues to keep time, indicating that watchdog-related functions should also continue to operate normally.

While the Raspberry Pi 5 does have a high power consumption in its default Power Off mode, this issue can be effectively addressed. By editing the EEPROM configuration, users can significantly reduce the device’s power consumption without compromising its functionalities. This solution represents a significant stride towards making the Raspberry Pi 5 more energy-efficient and environmentally friendly.

Image Source: Jeff Geerling

Filed Under: Hardware, Top News




