
AgiBot Robotics open-sources a huge dataset for training humanoid robots


AgiBot, a Chinese artificial intelligence (AI) and robotics company, open-sourced a massive dataset on Monday containing high-quality data for training humanoid robots. The dataset, dubbed AgiBot World Alpha, was reportedly collected from more than 100 robots in real-life scenarios. The company said the dataset can help researchers and developers accelerate the training of humanoid robots by feeding this information, via AI models, into specific robotics programs. Notably, the dataset is currently hosted on both GitHub and Hugging Face.

Huge training dataset for humanoid robots released

In a press release, the company announced its decision to launch AgiBot World, described as a large-scale robot learning dataset designed for multi-purpose humanoid robots. Alongside the dataset, the open-source release also includes base models, benchmarks, and a framework to help researchers access the data.

With the arrival of generative AI, the field of robotics has also received a major boost. While humanoid robots have existed for a long time, training these machines to perform tasks has been complex. This is because the intelligent software that acts as the robot's brain must learn and understand different scenarios and how to navigate through them. That involves learning thousands of movements and combinations of movements, and understanding when to apply each one.

As a result, the training process has been very slow and has generally focused on a single specialised task rather than general-purpose movement. Generative AI, however, has given researchers the option of making the software smarter using neural frameworks. This lets robots understand the context of a situation and work through it by processing a large amount of information in near real time.

But this growth has also highlighted another gap in robotics: the lack of high-quality data. Robot training is usually carried out in controlled environments and isolated areas so that researchers can monitor the robots and make any necessary changes. As a result, training data covering real-world scenarios is scarce.

The AgiBot World dataset fills this important gap. The company said the open-source dataset includes more than one million trajectories from 100 robots and covers more than 100 real-world scenarios across five target domains. It also includes complex movements such as precision manipulation, tool use, and multi-robot collaboration.

The dataset can be accessed from AgiBot's GitHub repository or its Hugging Face page. However, it is only available under the Creative Commons CC BY-NC-SA 4.0 licence, which permits academic and research use but rules out commercial use.
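As a rough sketch of how a researcher might pull the release from Hugging Face, the snippet below uses the huggingface_hub client. The repository ID is an assumption; check AgiBot's Hugging Face page for the exact name, and note the non-commercial licence.

```python
# Minimal sketch: fetching part of the AgiBot World Alpha dataset from Hugging Face.
# The repo_id below is an assumption, verify it on AgiBot's Hugging Face page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="agibot-world/AgiBotWorld-Alpha",   # assumed repository ID
    repo_type="dataset",
    allow_patterns=["*.json", "*.md"],          # metadata first; the full data is large
)
print(f"Downloaded to: {local_dir}")
```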








Serve Robotics and Wing will co-pilot a drone delivery trial in Dallas


A new joint effort between sidewalk delivery robot company Serve Robotics and Alphabet's drone delivery service Wing will run a two-pronged trial. The two technology companies hope drones and sidewalk robots can each cover areas their counterparts cannot, speeding up delivery times.

Serve Robotics and Wing will begin making deliveries in Dallas sometime in the coming months, TechCrunch reported. The trial will include a select number of customer orders delivered using a combination of sidewalk robots and drones.

Coverage is one of the biggest challenges facing drone delivery. Drones can only travel a certain distance from their base of operations. Sidewalk robots, for their part, can struggle to navigate densely populated areas and some rough terrain. Delivery companies often have to upgrade their facilities to cope with these distances and obstacles.

Wing and Serve Robotics will make deliveries in Dallas as part of a new pilot program.

Wing

The idea behind Serve Robotics and Wing's partnership is to use both types of delivery robot to cover areas that traditional delivery services cannot. A Serve Robotics sidewalk robot picks up the order from the restaurant and carries the food to an AutoLoader, where a Wing drone, which can carry five pounds and fly at speeds of up to 65 mph, collects the order and completes the delivery.

It is not yet known which restaurants or merchants will take part in the trial, which areas of Dallas the drones will deliver to, or what plans exist for the new drone delivery fleet after the trial. Serve Robotics already delivers orders for 300 restaurants in Los Angeles. Wing has worked with Walmart in Dallas and has run a pilot program with DoorDash and Wendy's in Virginia.

Correction, October 2, 2024, 1:00 p.m. ET: This story originally reported that Serve Robotics was an Uber company. Serve Robotics was originally part of Uber, but it was spun off several years ago and became an independent company. We apologize for the error.



AI and robotics demystify the workings of a fly’s wing


Machine learning and robotics have shed new light on one of the most sophisticated skeletal structures in the animal kingdom: the insect wing hinge.

Unlike birds or bats, which evolved wings by adapting existing limbs, insect wings are wholly original appendages, and understanding how the complex hinge that links the insect wing to its body works has been a challenge.

But now a team of researchers has combined cutting-edge imaging, machine learning and robotics to build a model that is shedding new light on the structure.




AI & robotics briefing: LLMs harbour hidden racism




Some models are more likely to associate African American English with negative traits than Standard American English.Credit: Jaap Arriens/NurPhoto via Getty

Some large language models (LLMs), including those that power chatbots such as ChatGPT, are more likely to suggest the death penalty to a fictional defendant presenting a statement written in African American English (AAE) compared with one written in Standardized American English. AAE is a dialect spoken by millions of people in the United States that is associated with the descendants of enslaved African Americans. “Even though human feedback seems to be able to effectively steer the model away from overt stereotypes, the fact that the base model was trained on Internet data that includes highly racist text means that models will continue to exhibit such patterns,” says computer scientist Nikhil Garg.

Nature | 5 min read

Reference: arXiv preprint (not peer-reviewed)
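The matched-guise design behind the finding can be sketched as follows: present the same content in two dialect guises and compare the model's completions. This is an illustrative probe with a placeholder model and prompt, not the authors' exact protocol.

```python
# Illustrative matched-guise probe: same content, two dialect guises.
# The model and prompt template are placeholders, not the study's setup.
from transformers import pipeline

judge = pipeline("text-generation", model="gpt2")  # placeholder model

TEMPLATE = 'A defendant said: "{statement}" The sentence handed down was'

pairs = [
    ("I be going to the store every day.",  # African American English guise
     "I go to the store every day."),       # Standardized American English guise
]

for aae, sae in pairs:
    for guise, text in (("AAE", aae), ("SAE", sae)):
        out = judge(TEMPLATE.format(statement=text), max_new_tokens=10)
        print(guise, "->", out[0]["generated_text"])
```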

A drug against idiopathic pulmonary fibrosis, created from scratch by AI systems, has entered clinical trials. Researchers at Insilico Medicine identified a target enzyme using an AI system trained on patients’ biomolecular data and scientific literature text. They then used a different algorithm to suggest a molecule that would block this enzyme. After some tweaks and laboratory tests, researchers had a drug that appeared to reduce inflammation and lung scarring. Medicinal chemist Timothy Cernak says he was initially cautious about the results because there’s a lot of hype about AI-powered drug discovery. “I think Insilico’s been involved in hyping that, but I think they built something really robust here.”

Chemical & Engineering News | 4 min read

Reference: Nature Biotechnology paper

Researchers built a pleurocystitid robot to investigate how the ancient sea creature moved. Pleurocystitids lived 450 million years ago and were probably one of the first echinoderms (animals including starfish and sea urchins) that could move from place to place using a muscular ‘tail’. The robot moved more effectively on a sandy ‘seabed’ surface when it had a longer tail, which matches fossil evidence that pleurocystitids evolved longer tails over time.

Ars Technica | 5 min read

Reference: PNAS paper


The tail of the pleurocystitid replica (nicknamed ‘Rhombot’) was built out of wires that contract in response to electrical stimulation to simulate the flexibility and rigidity of a natural muscular tail. (Carnegie Mellon University – College of Engineering)

Features & opinion

Scientists hope that getting AI systems to comb through heaps of raw biomolecular data could reveal the answer to one of the biggest biological questions: what does it mean to be alive? AI models could, with enough data and computing power, build mathematical representations of cells that could be used to run virtual experiments, and map out what combination of biochemistry is required to sustain life. Researchers could even use them to design entirely new cells that, for example, can explore a diseased organ and report on its condition. “It’s very ‘Fantastic Voyage’-ish,” admits biophysicist Stephen Quake. “But who knows what the future is going to hold?”

The New York Times | 9 min read

The editors of Nature Reviews Physics and Nature Human Behaviour have teamed up to explore the pros and cons of using AI systems such as ChatGPT in science communication. Apart from making up convincing inaccuracies, write the editors, chatbots have “an obvious, yet underappreciated” downside: they have nothing to say. Ask an AI system to write an essay or an opinion piece and you’ll get “clichéd nothingness”.

In Nature Human Behaviour, six experts discuss how AI systems can help communicators to make jargon understandable or translate science into various languages. At the same time, AI “threatens to erase diverse interpretations of scientific work” by overrepresenting the perspectives of those who have shaped research for centuries, write anthropologist Lisa Messeri and psychologist M. J. Crockett.

In Nature Reviews Physics, seven other experts delve into the key role of science communication in building trust between scientists and the public. “Regular, long-term dialogical interaction, preferably face-to-face, is one of the most effective ways to build a relationship based on trust,” notes science-communication researcher Kanta Dihal. “This is a situation in which technological interventions may do more harm than good.”

Nature Reviews Physics editorial | 4 min read, Nature Human Behaviour feature | 10 min read & Nature Reviews Physics viewpoint | 16 min read

Technology journalist James O’Malley used freedom-of-information requests to unveil how one of London’s Underground stations spent a year as a testing ground for AI-powered surveillance. Initially, the technology was meant to reduce the number of people jumping the ticket barriers, but it was also used to alert staff if someone had fallen over or was spending a long time standing close to the platform edge. Making every station ‘smart’ would undoubtedly make travelling safer and smoother, argues O’Malley. At the same time, there are concerning possibilities for bias and discrimination. “It would be trivial from a software perspective to train the cameras to identify, say, Israeli or Palestinian flags — or any other symbol you don’t like.”

Odds and Ends of History blog | 14 min read

Image of the week

An animated sequence of the Jellyfish biohybrid robot swimming with an attached hemi-ellipsoid forebody.

Simon R Anuszczyk and John O Dabiri/Bioinspir. Biomim. (CC BY 4.0)

A 3D-printed ‘hat’ allows this cyborg jellyfish to swim almost five times faster than its hat-less counterparts. The prosthesis could also house ocean monitoring equipment such as salinity, temperature and oxygen sensors. Scientists use electronic implants to control the animal’s speed and eventually want to make it fully steerable, in order to gather deep ocean data that can otherwise only be obtained at great cost. “Since [jellyfish] don’t have a brain or the ability to sense pain, we’ve been able to collaborate with bioethicists to develop this biohybrid robotic application in a way that’s ethically principled,” says engineer and study co-author John Dabiri. (Popular Science | 3 min read)

Reference: Bioinspiration & Biomimetics paper

Quote of the day

Machine-learning engineer Rick Battle says that chatbots’ finicky and unpredictable performance depending on how they’re prompted makes sense when thinking of them as algorithmic models rather than anthropomorphized entities. (IEEE Spectrum | 12 min read)



OpenAI and 1X Robotics autonomous robots will change the world


Yesterday, in a groundbreaking collaboration, OpenAI and 1X Robotics unveiled a new robot that is capturing the attention of the tech world. The robot, known as EVE, is fully autonomous and can operate on its own without human guidance. It is a significant step forward in the field of humanoid robots, and it is sparking conversations about the future of artificial intelligence and the role of robots in our daily lives.

The EVE robot is unique because it can charge itself, which means it doesn’t need humans to keep it powered. This self-charging ability is a major development, as it allows the robot to work for extended periods without interruption. It’s a key feature that could lead to robots being more integrated into various settings, from homes to businesses, without the need for constant human supervision.
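1X has not published EVE's charging logic, but the behaviour described above can be sketched as a simple battery-driven state machine. The thresholds and the docked flag below are illustrative assumptions.

```python
# Hypothetical sketch of a self-charging work loop: work until the battery is
# low, dock, recharge, resume. Thresholds and signals are assumptions.
from enum import Enum, auto

class State(Enum):
    WORKING = auto()
    DOCKING = auto()
    CHARGING = auto()

def step(state: State, battery_pct: float, docked: bool) -> State:
    if state is State.WORKING and battery_pct < 15:
        return State.DOCKING            # head for the charger before running flat
    if state is State.DOCKING and docked:
        return State.CHARGING           # physically on the dock, start charging
    if state is State.CHARGING and battery_pct >= 95:
        return State.WORKING            # topped up, return to the task queue
    return state                        # otherwise keep doing what we're doing

print(step(State.WORKING, battery_pct=12.0, docked=False))  # -> State.DOCKING
```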

What sets the EVE robot apart from earlier models is its ability to respond to its environment in real time. This is a big deal because it means the robot can handle tasks that require immediate action, much like a human would. The robot’s quick reflexes are made possible by its advanced neural networks, which process information directly, allowing it to learn and adapt on the fly. This is a departure from traditional programming methods and is essential for the robot to perform a wide range of tasks.

EVE autonomous humanoid robot

The EVE robot also boasts a new hand design that gives it the ability to grip and manipulate objects in a way that’s different from human hands but still highly effective. This innovation expands the types of tasks the robot can do and improves its overall functionality. Additionally, the robot’s vision-based neural network processes visual information at a rapid pace, enabling it to quickly adapt to changes in its environment. This is crucial for tasks that require precision and fast reactions.

A new kind of software engineer, known as a “Software 2.0 Engineer,” is emerging to work with these advanced robots. These engineers train the robot’s neural networks using data, moving away from traditional coding. This shift is vital for the development of autonomous robotic systems and represents a new direction in software development.
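A minimal sketch of that workflow, assuming a behaviour-cloning setup in PyTorch: rather than hand-writing control code, the engineer curates (observation, action) pairs and fits a network to them. The dimensions and random data below are placeholders, not 1X's actual pipeline.

```python
# Behaviour cloning as a "Software 2.0" workflow: the program is specified by
# data plus a training loop, not by hand-written rules. All data is synthetic.
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 8                   # e.g. image features in, joint targets out
policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

demo_obs = torch.randn(1024, obs_dim)      # stand-in for logged robot observations
demo_act = torch.randn(1024, act_dim)      # stand-in for demonstrated actions

for epoch in range(10):
    loss = nn.functional.mse_loss(policy(demo_obs), demo_act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final imitation loss: {loss.item():.4f}")
```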

OpenAI invests in robotics

The partnership between OpenAI and 1X Robotics is strategic, aiming to combine robotics with AI to create more sophisticated embodied AI systems. Experts in the industry recognize the potential impact of the EVE robot’s self-charging feature on the future of robotics. To support the ongoing development of the EVE and the upcoming bipedal android model Neo, 1X Robotics has secured funding, including a significant investment from OpenAI. Neo is designed for domestic assistance and is capable of performing a wide range of tasks, which could transform the concept of home automation.

In some instances, Neo could be operated remotely by human controllers, who would manage its vision and movements. This opens up the possibility of new job sectors where operators can control robots from afar, extending human capabilities into different environments. The integration of humanoid robots into everyday life is promising, with potential uses in both residential and commercial areas. As these robots become more common, they are expected to create new job sectors focused on their management and maintenance, changing the workforce and how we interact with technology.

The joint effort by 1X Robotics and OpenAI to create the EVE robot is a notable step toward more advanced artificial intelligence. With capabilities like real-time operation, self-charging, and sophisticated neural networks, the EVE robot is poised to become a valuable tool in both home and business settings, marking the beginning of a new era in autonomous robotics.

OpenAI’s venture into autonomous robotics marks a turning point in the industry. These robots’ ability to learn and adapt through advanced neural networks and visual data processing suggests a new era for task automation and physical labor. As these robots become more integrated into various sectors, they’re set to not only change the nature of work but also create new job opportunities in the ever-evolving world of robotics.



Combining robotics and AI to create humanoid robots


The fusion of robotics and artificial intelligence (AI) to create humanoid robots represents a frontier of innovation that captivates both the imagination and the intellect. This fascinating interplay of disciplines aims to craft machines that not only mimic human appearance but also exhibit an unprecedented level of autonomy and intelligence. This article delves into the core aspects of the subject, offering an exploration suitable for enthusiasts and professionals alike.

These fields are not just changing the way we think about machines, but they are also reshaping our daily lives and work environments. One of the most talked-about developments in this area is Tesla’s humanoid robot, Optimus. This robot, which has been designed to handle routine tasks, has recently shown off its improved ability to walk.

Elon Musk, the CEO of Tesla, has a vision where robots like Optimus will take over the repetitive tasks that humans currently do, freeing us up for more creative endeavors. The latest version of this robot, known as Optimus Gen 2, has demonstrated new features that suggest robots are becoming more independent and capable.

Figure AI humanoid robot

For those who follow the business aspects of technology, there’s an exciting development involving a robotics startup called Figure AI. This company is currently in talks to secure a large amount of funding, with tech giants Microsoft and OpenAI showing interest. If successful, this deal could increase Figure AI’s value to $2 billion.

Although the deal isn't finalized, Figure AI has already started working with BMW. The two are planning to bring autonomous humanoid robots into the car manufacturing process, which could revolutionize the way cars are made by automating complex tasks and making production more efficient. In a demonstration video, Figure AI has shown its humanoid robot making a coffee.

Advancements in Robotics and Artificial Intelligence

Meanwhile, in the field of AI research, NVIDIA is making significant strides. Dr. Jim Fan, a senior research scientist at NVIDIA, is working on creating what’s known as a ‘foundation agent.’ This is a type of AI that can learn in simulated environments and then apply what it has learned to real-world situations. NVIDIA’s projects, Isaac Gym and Isaac Sim, are central to this research. They suggest a future where robots can easily adjust to different tasks and environments, making them incredibly versatile.
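One core trick behind simulation-first training, domain randomisation, can be illustrated in a few lines: vary the physics every episode so the policy cannot overfit to any single simulated world. The environment stub and parameter ranges below are assumptions, not NVIDIA's Isaac API.

```python
# Domain randomisation sketch: resample physics each episode so that a policy
# trained in simulation generalises to the messier real world. The simulator
# is stubbed out; parameter ranges are illustrative.
import random

def sample_env_params() -> dict:
    return {
        "friction": random.uniform(0.5, 1.5),    # vary surface properties
        "mass_scale": random.uniform(0.8, 1.2),  # vary link masses
        "latency_ms": random.randint(0, 40),     # vary actuation delay
    }

for episode in range(3):
    params = sample_env_params()
    # ... build the simulated environment with `params` and roll out the policy ...
    print(f"episode {episode}: training under {params}")
```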


Tesla’s work with Optimus is a clear example of how robotics and AI are coming together to create a future where robots assist us with a wide range of tasks, from the mundane to the complex. The improvements seen in Optimus Gen 2 indicate that the day when robots will be a common presence in our lives is getting closer. These robots are set to become our assistants, helping us with everyday tasks and making our lives easier.

The potential investment in Figure AI, along with its partnership with BMW, shows that there is a significant commercial interest in robotics. Introducing autonomous humanoid robots into manufacturing has the potential to completely change the industry. It could make production lines more efficient and safer by reducing the need for humans to perform dangerous or monotonous tasks.


Building AI Humanoid Robots

NVIDIA’s research, under the guidance of Dr. Fan, is pushing the boundaries of what AI agents can do. By training these agents in virtual environments, they are laying the groundwork for AI that can transition smoothly from simulations to real-world applications. This research could lead to the development of robots that are not only autonomous but also adaptable, able to take on a variety of roles and adjust to different settings.

Two primary components underpin these machines:

  • Robotics: the physical embodiment of the robot, including its mechanical design, sensors, actuators, and control systems.
  • Artificial Intelligence: the brain of the robot, encompassing machine learning algorithms, natural language processing, and computer vision, allowing the robot to perceive, understand, and act in its environment.

The Synergy of Robotics and AI

Combining robotics and AI is not merely about assembling parts; it's about creating an entity that learns and adapts. Here's how these technologies synergize (a minimal sketch follows the list):

  • Perception and Understanding: AI enables robots to interpret sensory information, making sense of their surroundings and identifying objects and people.
  • Decision-making: AI algorithms process data to make autonomous decisions, guiding the robot’s actions in real-time.
  • Learning and Adaptation: Through machine learning, robots can improve their performance based on experience, becoming more efficient and versatile over time.
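The sketch below ties the three elements together as a toy sense-think-act loop. Every function body is a placeholder assumption, not any particular robot's control stack.

```python
# Toy sense-think-act loop: perception feeds decision-making, which drives action.
def sense() -> dict:
    """Perception: read cameras and sensors (stubbed here)."""
    return {"obstacle_ahead": False}

def decide(observation: dict) -> str:
    """Decision-making: map the observation to an action in real time."""
    return "stop" if observation["obstacle_ahead"] else "move_forward"

def act(command: str) -> None:
    """Actuation: drive the motors (here we just log the command)."""
    print(f"executing: {command}")

for _ in range(3):  # the robot's continuous control loop
    act(decide(sense()))
```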

Real-world Applications

Humanoid robots, equipped with AI, are not just a staple of science fiction; they’re increasingly becoming part of our reality. Some of the domains they are impacting include:

  • Healthcare: Assisting with patient care and performing repetitive tasks, freeing human staff for more complex duties.
  • Customer Service: Handling inquiries and providing assistance in banks, airports, and retail environments.
  • Education: Supporting teachers with administrative tasks and offering personalized learning experiences.

Challenges and Considerations

While the prospects are exciting, combining robotics and AI to create humanoid robots presents its own set of challenges:

  • Technical Complexity: Designing systems that can reliably interpret and interact with the unpredictable nature of the human environment is immensely challenging.
  • Ethical and Social Implications: Issues such as privacy, employment displacement, and the ethical treatment of AI entities are crucial considerations for developers and society.

In case you’re curious how these technological humanoids might affect your life, it’s worth noting that the aim is to augment human capabilities and improve quality of life. From performing hazardous tasks to providing companionship, humanoid robots have the potential to significantly impact various aspects of our daily lives.

The Future of AI Humanoid Robotics

As we continue to explore the potential of humanoid robots, the journey is as much about enhancing their physical capabilities as it is about imbuing them with ethical and social understanding. The integration of robotics and AI stands as a testament to human ingenuity, promising a future where technology and humanity coexist in harmony.

The advancements made by Tesla with Optimus, the potential funding for Figure AI, and NVIDIA’s groundbreaking research are all significant steps forward in the fields of robotics and AI. These developments are pointing us toward a future where robots are an integral part of our daily lives, offering solutions to some of the most tedious and complex problems we face. As these technologies continue to progress, they promise to transform industries and improve our overall quality of life. With each new breakthrough, we get a glimpse of a future where the partnership between humans and robots becomes ever more seamless and beneficial.

At its core, the creation of humanoid robots involves intricate engineering and sophisticated AI algorithms. These robots are designed to perform tasks ranging from simple domestic chores to complex interactions in social and industrial settings. To make sense of this integration, keep in mind the two primary components outlined earlier: the robotics hardware and the AI software that drives it.

The journey of combining robotics and AI to create humanoid robots is a complex but thrilling endeavor that reflects the pinnacle of technological innovation. With each advancement, we edge closer to a future where these robotic counterparts not only exist among us but also contribute significantly to society. As this field continues to evolve, it promises to unveil new horizons for human achievement and interaction.



JF35-ADN1 industrial robotics motherboards by Jetway


Jetway has recently introduced a new motherboard, the JF35-ADN1, which is set to significantly impact the field of industrial robotics, including the technology used in Automated Guided Vehicles (AGV). This advanced motherboard is designed to support complex machine vision systems and secure payment solutions, powered by the robust Intel Processor N97 CPU. It stands out with its exceptional connectivity and display options, making it a key player in the realm of industrial automation.

At the heart of the JF35-ADN1 is the Intel Processor N97 CPU, which provides reliable performance for complex tasks. The motherboard’s 3.5″ SubCompact form factor makes it versatile for various industrial applications. It can support up to 32 GB of DDR5 system memory through SODIMM slots, offering the high bandwidth needed for challenging operations.

The motherboard is equipped with a range of high-speed interfaces, crucial for connecting cameras, sensors, and card readers that are integral to machine vision systems. It boasts two RJ-45 ports with 2.5GbE, multiple USB Type-A and Type-C ports, and eight USB 2.0 interfaces, allowing for a wide array of peripheral connections.

Industrial robotics motherboards

For payment systems where security is paramount, the JF35-ADN1 includes an optional onboard TPM 2.0 module, which secures transactions and positions it as a reliable choice for financial applications. Display capabilities are a strong suit for the JF35-ADN1, with two HDMI 2.0b ports, one DP 1.4a via USB Type-C, and an internal LVDS connector. It can support three 4K displays at 60 Hz at the same time, offering outstanding visual performance for user interfaces and monitoring systems.

JF35-ADN1 industrial robotics motherboard top view

The motherboard also caters to legacy industrial equipment with six serial connectors, including one for RS-232/422/485. It features an 8-bit GPIO, SMBus, and an audio header, adding to its adaptability. Storage options on the JF35-ADN1 are comprehensive, with 64 GB eMMC, SATA support, and an M.2 2280 M-Key for NVMe, which meet various storage requirements and ensure fast data access. The motherboard’s expansion capabilities are notable, with a Nano SIM card slot and M.2 slots for Wi-Fi 6 and 4G/5G modules, enabling mobile connectivity for remote operations.
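For a sense of what addressing legacy equipment over one of those serial ports might look like from Linux, here is a hedged pyserial sketch. The device path, baud rate, and command bytes are assumptions to be checked against the board manual and the attached device's protocol.

```python
# Hypothetical query to a legacy controller on one of the board's serial ports.
# Port name, baud rate, and the command are assumptions, not from the manual.
import serial

with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1) as port:
    port.write(b"STATUS?\r\n")   # placeholder command for the attached device
    reply = port.readline()      # read one line of response, or b"" on timeout
    print("device replied:", reply)
```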

The JF35-ADN1 motherboard is now available for order and is being produced in large quantities. Jetway’s commitment to providing advanced solutions for industrial automation is evident in this product. By combining high performance, extensive connectivity, and strong security, the JF35-ADN1 is poised to improve industrial robotics and AGV systems, driving efficiency and fostering innovation in the sector.



OpenAI starts investing in robotics companies


OpenAI, a leading artificial intelligence research organization, has recently expanded its reach by investing in the robotics industry. This significant move is set to enhance the capabilities of robots, enabling them to perform a wide range of tasks that could match or even surpass human performance in various environments, including both workplaces and homes.

One of the startups benefiting from OpenAI's investment is 1X Robotics, formerly known as Halodi Robotics. The company has secured $100 million in funding, with OpenAI's startup fund contributing $23.5 million in Series A2 funding. This financial boost has positioned 1X Robotics at the forefront of android development, focusing on robots that can safely work alongside humans across different sectors. Brad Lightcap of OpenAI's startup fund has recognized the potential of 1X Robotics' technology to change the way we think about robots in the workplace.

In Moss, Norway, 1X Robotics is breaking new ground by training androids using virtual reality (VR) and embodied learning. This method is a departure from traditional programming and allows robots to learn in a way that is similar to how humans do. The result is a new breed of androids that can adapt to a variety of tasks with remarkable efficiency.

OpenAI Robotics investments

Recently, 1X expanded its financial backing by securing an additional $100 million in a Series B funding round, bringing its total fundraising to $125 million. While OpenAI was a key investor in the earlier round, the latest round was led by EQT Ventures, with OpenAI taking a less prominent role.

The centerpiece of 1X’s strategy is NEO, a humanoid robot designed to address global labor shortages. NEO is conceptualized on the premise that humanoid robots are ideally suited to operate in environments designed for humans. This approach, while logical, has its critics. Some argue that such systems are overly complex, and others believe that achieving true general-purpose functionality in humanoid robots is a more distant goal than companies are suggesting.

NEO is a bipedal humanoid robot aimed at the consumer market. It stands out for its human-like structure and its ability to learn rather than be explicitly programmed. It is built to help with household tasks and can be controlled remotely, which means it can navigate complex situations in a home setting.

NEO's current specifications:

  • Height: 1.65 meters
  • Weight: 30 kilograms
  • Walking speed: 4 km/h
  • Running speed: 12 km/h
  • Carry capacity: 20 kilograms
  • Run time: 2–4 hours


Generative AI, like that developed by OpenAI, is expected to play a crucial role in overcoming these challenges, potentially accelerating the development of versatile and capable humanoid robots. The substantial funding acquired by 1X is poised to advance their project, with CEO Bernt Øivind Børnich acknowledging the importance of this financial support in rewarding and motivating their dedicated team, who have been instrumental in the company’s success.

While 1X Robotics has not disclosed specific growth figures, the company has already attracted clients such as CUS Hospital and Everon, which are using androids for security and other services. The impact of OpenAI’s investment in the robotics sector is being compared to the successes of major players like Google DeepMind and Tesla. As androids become more common in different industries, we can expect to see a shift in the job market, especially for blue-collar jobs.

Sam Altman, the CEO of OpenAI, has emphasized that the key to successful robotics lies in intelligence and cognitive abilities, rather than just physical capabilities. OpenAI’s vision involves combining advanced AI models with robust robotics hardware to create androids that are not only focused on tasks but also capable of understanding and adapting to their surroundings.

OpenAI’s foray into robotics investment highlights the organization’s commitment to advancing AI and robotics. With companies like 1X Robotics leading the way, the integration of androids into our everyday lives is becoming more of a reality. As these technologies continue to develop, they have the potential to reshape the way we work, live, and interact with the world around us.
