Neural Lab's AirTouch provides gesture control for Windows and Android devices using only a webcam

Some of the best tech we see at CES seems pulled straight from science fiction. Yesterday at CES 2025, I tried Neural Lab's AirTouch technology, which lets you interact with a screen using nothing but hand gestures, exactly the sort of thing movies like Minority Report and Iron Man promised. Naturally, plenty of companies have introduced different forms of gesture control: Microsoft's Kinect is one of the earliest examples, while the Apple Watch's double tap feature and the Vision Pro's pinch gestures are just two of many current iterations. But I was impressed by how well AirTouch delivered and, unlike most gesture tech out there, it requires no special equipment (just a standard webcam) and works across a wide range of devices.

Neural Lab supports tablets, PCs and any device running at least Android 11, Windows 10 or later, or Linux. The technology was developed with accessibility in mind after one of the founders struggled to stay in touch with his parents overseas because navigating videoconferencing software was too hard for the older generation. The Neural Lab representative I spoke with added that his parents prefer to use an iPad rather than a computer, mouse and keyboard combination because touch controls are far more intuitive. With AirTouch, they can use their TV just as they would a tablet.

Beyond accessibility, there are also plenty of commercial applications, such as letting surgeons manipulate MRI images without touching anything, or a more everyday scenario like flipping through slides in a presentation.

AirTouch tracks hand movements in 3D along with gaze changes to recognize intent, which lets it ignore stray gestures. It currently supports nine gestures, and customization lets users program up to 15.
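Neural Lab has not published how AirTouch works internally, but a rough sketch of the general approach (webcam frames in, hand landmarks out, then a simple rule per gesture) can be put together with the open-source MediaPipe and OpenCV libraries. The thumbs-up heuristic below is purely illustrative and is not Neural Lab's implementation.

```python
# Illustrative sketch only: a webcam "thumbs up" detector built on the
# open-source MediaPipe Hands and OpenCV libraries. This is NOT AirTouch's
# code, just one common way to turn webcam frames into gesture input.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def is_thumbs_up(hand_landmarks) -> bool:
    """Rough heuristic: thumb tip above the wrist while the index fingertip
    stays below its own knuckle (fingers curled)."""
    wrist = hand_landmarks.landmark[0]
    thumb_tip = hand_landmarks.landmark[4]
    index_tip = hand_landmarks.landmark[8]
    index_knuckle = hand_landmarks.landmark[5]
    # Image coordinates: y grows downward, so "above" means a smaller y.
    return thumb_tip.y < wrist.y and index_tip.y > index_knuckle.y

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                if is_thumbs_up(hand):
                    print("thumbs up detected")
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
```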

I tried two demos: a 3D display with an animated tree frog, and a screen showing a web page in a browser. On the 3D display, one finger dropped a pinecone on the frog's head, two fingers dropped an acorn, a thumbs-up spun the frog around and a "hang loose" gesture brought it back. It took me about 15 seconds to learn and use all four gestures, and soon I was showering the poor frog with acorns like a bad-tempered squirrel.

Controlling the screen showing the web browser was almost as easy (though not as fun). Moving my hand dragged the cursor around the screen, and a pinch gesture stood in for a click. I was able to scroll through a streaming site, select something to play, pause it and play it again within a few seconds of learning the hand movements. There were a few cases where my movements didn't do what I expected, but after a few tries I started to get the hang of the controls.

AirTouch is available now as a $30 monthly subscription for individuals (and $300 per month for businesses). Neural Lab says it takes just five minutes to install the software on any compatible device.

OpenAI insider discusses AGI and Scaling Laws of Neural Nets

Imagine a future where machines think like us, understand like us, and perhaps even surpass our own intellectual capabilities. This isn't just a scene from a science fiction movie; it's a goal that experts like Scott Aaronson from OpenAI are working towards. Aaronson, a prominent figure in quantum computing, has shifted his focus to a new frontier: Artificial General Intelligence (AGI). This is the kind of intelligence that could match or even exceed human brainpower. Wes Roth takes a deeper look at this new technology and at what we can expect in the near future from OpenAI and others working on AGI and the scaling laws of neural nets.

At OpenAI, Aaronson is deeply involved in the quest to create AGI. He’s looking at the big picture, trying to figure out how to make sure these powerful AI systems don’t accidentally cause harm. It’s a major concern for those in the AI field because as these systems become more complex, the risks grow too.

Aaronson sees a connection between the way our brains work and how neural networks in AI operate. He suggests that the complexity of AI could one day be on par with the human brain, which has about 100 trillion synapses. This idea is fascinating because it suggests that machines could potentially think and learn like we do.

OpenAI AGI

There’s been a lot of buzz about a paper that Aaronson reviewed. It talked about creating an AI model with 100 trillion parameters. That’s a huge number, and it’s sparked a lot of debate. People are wondering if it’s even possible to build such a model and what it would mean for the future of AI. One of the big questions Aaronson is asking is whether AI systems like GPT really understand what they’re doing or if they’re just good at pretending. It’s an important distinction because true understanding is a big step towards AGI.

Scaling Laws of Neural Nets

But Aaronson isn’t just critiquing other people’s work; he’s also helping to build a mathematical framework to make AI safer. This framework is all about predicting and preventing the risks that come with more advanced AI systems. There’s a lot of interest in how the number of parameters in an AI system affects its performance. Some people think that there’s a certain number of parameters that an AI needs to have before it can act like a human. If that’s true, then maybe AGI has been possible for a long time, and we just didn’t have the computing power or the data to make it happen.
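For a sense of what a "scaling law" looks like in practice, published results such as Kaplan et al. (2020) describe language-model loss as a power law in parameter count, roughly loss(N) = (N_c / N)^alpha. The snippet below just evaluates a curve of that shape; the constants are in the ballpark of the published fits but are used here only for illustration, not as OpenAI's actual numbers.

```python
# Illustrative power-law scaling curve: loss(N) = (N_c / N) ** alpha.
# Constants are roughly in the range reported by Kaplan et al. (2020);
# they are used here only to show the shape of the relationship.
def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    return (n_c / n_params) ** alpha

for n in (1e9, 1e11, 1e13, 1e14):  # 1 billion up to 100 trillion parameters
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The point of curves like this is that performance keeps improving smoothly as parameter counts grow, which is exactly why the 100-trillion-parameter question attracts so much debate.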

Aaronson also thinks about what it would mean for AI to reach the complexity of a cat's brain. That might not sound like much, but it would be a big step forward for AI capabilities. Then there's the idea of Transformative AI (TAI). This is AI that could take over jobs that people currently do remotely. It's a big deal because it could change entire industries and affect jobs all over the world.

People have different ideas about how many parameters an AI needs to reach AGI. These estimates are based on ongoing research and a better understanding of how neural networks grow and change. Aaronson’s own work on the computational complexity of linear optics is helping to shed light on what’s needed for AGI.

Scott Aaronson’s insights give us a peek into the current state of AGI research. The way parameters in neural networks scale and the ethical issues around AI development are at the heart of this fast-moving field. As we push the limits of AI, conversations between experts like Aaronson and the broader AI community will play a crucial role in shaping what AGI will look like in the future.

ChatGPT and how Neural Networks learned to talk

Thanks to the incredible advancements in neural networks and language processing, computers can understand and respond to human language just as another person might. The journey from the first moments of doubt to the current state of achievement is a tale of relentless innovation and discovery. The Art of the Problem YouTube channel has created a fantastic history documenting the 30-year journey that has brought us to ChatGPT-4 and other AI models.

Back in the 1980s, the notion that machines could grasp the nuances of human language was met with skepticism. Yet, the evolution of neural networks from basic, single-purpose systems to intricate, versatile models has been nothing short of remarkable. A pivotal moment came in 1986 when Michael I. Jordan introduced recurrent neural networks (RNNs). These networks had memory cells that could learn sequences, which is crucial for language understanding.
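As a rough illustration of the idea (not Jordan's original formulation), a recurrent network keeps a hidden state that is updated at every step of a sequence, so information from earlier words can influence how later ones are processed. The NumPy sketch below shows that loop with random, untrained weights.

```python
# Minimal sketch of a recurrent cell: the hidden state h carries information
# from earlier steps forward, which is what lets the network learn sequences.
# Weights here are random; a real model would learn them from data.
import numpy as np

rng = np.random.default_rng(0)
hidden, dim = 16, 8
W_xh = rng.normal(scale=0.1, size=(hidden, dim))     # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden (the "memory")
b = np.zeros(hidden)

def rnn_step(h, x):
    return np.tanh(W_xh @ x + W_hh @ h + b)

h = np.zeros(hidden)
sequence = rng.normal(size=(5, dim))  # five dummy "word" vectors
for x in sequence:
    h = rnn_step(h, x)  # h now summarizes everything seen so far
print(h.shape)  # (16,)
```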

The early 1990s saw Jeffrey Elman’s experiments, which showed that neural networks could figure out word boundaries and group words by meaning without being directly told to do so. This discovery was a huge step forward, suggesting that neural networks might be able to decode language structures on their own.

How Neural Networks learned to talk

As we moved into the 2010s, the push for larger neural networks led to improved language prediction and generation abilities. These sophisticated models could sift through massive data sets, learning from context and experience, much like how humans learn.

Then, in 2017, the Transformer architecture came onto the scene. This new method used self-attention layers to handle sequences all at once, effectively overcoming the memory constraints of RNNs. The Transformer model was the foundation for the Generative Pretrained Transformer (GPT) models.
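As a simplified sketch of the core operation (a single attention head, no masking, random untrained weights, nothing taken from GPT's actual code), scaled dot-product self-attention lets every position in a sequence attend to every other position in one pass, rather than stepping through the sequence the way an RNN does.

```python
# Minimal single-head scaled dot-product self-attention in NumPy. Every token
# attends to every other token in one matrix multiplication -- a sketch of the
# Transformer's core idea, not a production implementation.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 32
x = rng.normal(size=(seq_len, d_model))  # one "sentence" of 6 token vectors
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(3))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)   # (6, 6): every token scored against every token
weights = softmax(scores, axis=-1)
output = weights @ V                  # contextualized token vectors
print(output.shape)  # (6, 32)
```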

GPT models are known for their remarkable ability to perform tasks without task-specific training: they can follow instructions and handle tasks they haven't been directly trained on. This was a huge leap forward in AI, showing a level of adaptability and understanding that was once thought impossible.

ChatGPT, a variant of these models, became a tool that many people could use, allowing them to interact with an advanced language model. Its ability to hold conversations that feel human has been impressive, indicating the enormous potential of these technologies.

One of the latest breakthroughs is in-context learning. This allows models like ChatGPT to take in new information while they're being used, adapting to new situations without any change to their underlying weights. This is similar to how humans learn, with context playing a vital role in understanding and using new knowledge.
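A quick, hypothetical example of what in-context learning looks like in practice: the task is defined entirely by a few examples inside the prompt, and none of the model's weights change.

```python
# Hypothetical few-shot prompt: the model picks up the task (sentiment labeling)
# from the examples in the prompt itself -- nothing about its weights changes.
prompt = """Label the sentiment of each review as positive or negative.

Review: "The battery lasts all day." -> positive
Review: "It broke after a week." -> negative
Review: "Setup took thirty seconds and it just works." ->"""
# Sent to any chat/completions API, a capable model will typically answer "positive".
print(prompt)
```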

However, the rapid progress has sparked a debate among AI experts. Are these models truly understanding language, or are they just simulating comprehension? This question is at the heart of discussions among professionals in the field.

Looking ahead, the potential for large language models to act as the basis for a new type of operating system is significant. They could transform tasks that computers typically handle, marking a new era of how humans interact with machines.

The road from initial doubt to today’s advanced language models has been long and filled with breakthroughs. The progress of neural networks has transformed language processing and paved the way for a future where computers might engage with human language in ways we never thought possible. The transformative impact of these technologies continues to reshape our world, with the promise of even more astounding advancements on the horizon.
