
How the brain coordinates speaking and breathing


MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

The researchers found that these RAm neurons labeled by synaptic tracing were strongly activated during USVs. This observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, which they termed RAmVOC. They then used chemogenetics and optogenetics to explore what would happen if these neurons were silenced or stimulated. When the researchers blocked the RAmVOC neurons, the mice could no longer produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, the USVs were interrupted by inhalations, suggesting that the process is under the control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.



Deals: 2024 Complete Presentation & Public Speaking Bundle


Have you ever wished you could captivate an audience with your words? Or perhaps you’ve dreamt of delivering a presentation that leaves everyone in the room hanging on your every word? Well, your dreams are about to become a reality with the Complete Presentation & Public Speaking/Speech Course.

This comprehensive training program is designed to help you master the art of public speaking and presentation skills. Taught by Chris Haroun, an award-winning business school professor, venture capitalist, and MBA graduate from Columbia University, this course is your ticket to becoming a confident and engaging speaker.

Key Features of the Course

  • 206 lectures and 16 hours of content, accessible 24/7 for a lifetime. This means you can learn at your own pace and revisit the material whenever you need a refresher.
  • Covers all aspects of public speaking, from understanding the audience to building confidence and creating engaging content. This ensures you’re well-equipped to handle any speaking situation.
  • Includes numerous exercises, examples, and templates to assist in crafting speeches. These practical tools will help you apply what you’ve learned and create compelling presentations.
  • Suitable for both beginners and seasoned speakers. Whether you’re just starting out or looking to refine your skills, this course has something for everyone.
  • Upon completion, students receive a Certificate of Completion. This is a great addition to your resume and can help you stand out in the job market.

The course does not include any software; access to a presentation application such as Keynote or Microsoft PowerPoint is helpful but optional. This gives you the flexibility to use the tools you’re most comfortable with while applying the techniques you learn.

With an average rating of 4.6/5, it’s clear that this course delivers on its promise to transform your public speaking skills. But don’t just take our word for it. Join the thousands of students who have already unlocked their potential with this Complete Presentation & Public Speaking/Speech Course.

So, are you ready to step into the spotlight and let your voice be heard? Don’t wait another moment. Your journey to becoming a masterful speaker starts here.

Get this deal>

Filed Under: Deals






Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


Create your very own speaking AI assistant using Node.js

How to build a speaking AI assistant using Node.js, ChatGPT, ElevenLabs and LangChain

Interested in building your very own AI assistant, complete with voice and personality, using a combination of Node.js, OpenAI Whisper, ChatGPT, ElevenLabs and LangChain? This guide offers more insight into how to get started and features a video by Developers Digest that shows how to combine these technologies into a speaking AI assistant in just nine minutes, with Node.js as the primary platform.

Node.js is a runtime environment that allows you to execute JavaScript code on the server side, outside the browser. It runs on Windows, macOS, and Linux and is commonly used for building back-end services and APIs. By bringing JavaScript to the server, it unifies the programming language across client and server, making it easier for developers to build full-stack applications.

Node.js is built on Google’s V8 JavaScript engine and uses an event-driven, non-blocking I/O model, making it efficient for scalable applications. It has a rich ecosystem of libraries and frameworks available through its package manager, npm (Node Package Manager), which can be used to extend its functionality.
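
To make the event-driven, non-blocking model concrete, here is a minimal sketch (the file name is illustrative, not from the video) showing how Node.js hands I/O off to the system and keeps running:

    // Non-blocking read: Node.js starts the I/O, then immediately moves on.
    const fs = require('fs');

    fs.readFile('notes.txt', 'utf8', (err, data) => {
      // This callback runs later, once the operating system finishes the read.
      if (err) {
        console.error('Read failed:', err.message);
        return;
      }
      console.log('File contents:', data);
    });

    console.log('This line prints before the file has been read.');

Because the read never blocks the event loop, the final log line runs first; the same pattern underlies how Node.js handles network requests and the API calls used throughout this guide.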

Building a personal AI assistant using Node.js

With the right tools and a little bit of coding knowledge, you can create an assistant that listens to your commands, understands them, and responds in a natural, human-like voice. This article will guide you through the process of setting up a voice assistant using the OpenAI API, ElevenLabs, and Node.js.

ElevenLabs is a voice AI company that creates realistic, versatile, and contextually aware AI audio. It can generate speech in hundreds of new and existing voices in over 20 languages. OpenAI, on the other hand, is an artificial intelligence research lab that provides powerful APIs for various AI tasks, including natural language processing and understanding.


Why build your very own AI assistant?

  • Unified Tech Stack: Node.js allows you to write server-side code in JavaScript, potentially unifying your tech stack if you’re also using JavaScript on the client side. This makes development more streamlined.
  • Cutting-Edge Technology: ChatGPT is based on one of the most advanced language models available, offering high-quality conversational capabilities. Integrating it with your assistant can provide a robust natural language interface.
  • Customization: Using ElevenLabs and LangChain, you can customize the AI’s behavior, user experience, and even the data sources it can interact with, making your personal assistant highly tailored to your needs.
  • Scalability: Node.js is known for its scalable architecture, allowing you to easily expand your assistant’s capabilities or user base without a complete overhaul.
  • Learning Opportunity: The project could serve as an excellent learning experience in fields like NLP, AI, server-side development, and UI/UX design.
  • Open Source and Community: Both Node.js and some elements of the GPT ecosystem have strong community support. You can leverage this for troubleshooting, updates, or even contributions to your project.
  • Interdisciplinary Skills: Working on such a project would require a mix of skills – from front-end and back-end development to machine learning and user experience design, offering a well-rounded experience.
  • Innovation: Given that personal AI assistants are a growing field but still relatively new, your project could contribute new ideas or approaches that haven’t been explored before.
  • Practical Utility: Finally, building your own personal assistant means you can design it to cater to your specific needs, solving problems or automating tasks in your daily life.

To create your very own speaking AI assistant, you’ll need to acquire API keys from both ElevenLabs and OpenAI. These keys can be obtained by creating accounts on both platforms and viewing the API keys in the account settings. Once you have these keys, you can start setting up your voice assistant.

Creating a personal AI assistant capable of speech

The first step in creating your very own speaking AI assistant is to establish a new project directory. This directory will contain all the files and code necessary for your assistant. Within it, you’ll need to create an environment file (.env) for your API keys; this file stores your keys securely and makes them accessible to your code. Next, you’ll need to create an index file and an ‘audio’ directory. The index file will contain the main code for your assistant, while the ‘audio’ directory will store the audio files it generates.
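
As a rough sketch of that layout (the variable names and the dotenv package are assumptions on my part, not requirements from the video), the .env file and the code that reads it might look like this:

    // .env (keep this file out of version control):
    //   OPENAI_API_KEY=sk-...
    //   ELEVENLABS_API_KEY=...

    // index.js -- load the keys at startup (assumes `npm install dotenv`).
    require('dotenv').config();

    const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
    const ELEVENLABS_API_KEY = process.env.ELEVENLABS_API_KEY;

    if (!OPENAI_API_KEY || !ELEVENLABS_API_KEY) {
      throw new Error('Missing API keys -- check your .env file.');
    }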

Node.js

Once your directory structure is set up, you’ll need to install the necessary packages. These packages provide the functionality your assistant needs to listen for commands, understand them, and generate responses. You can install them with npm, the package manager that ships with Node.js. After installing the packages, import them into your index file to make their functionality available to your code.
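
The exact package list depends on the approach in the video, but as one plausible sketch you might install the official OpenAI SDK alongside dotenv and then import everything at the top of index.js (a microphone-recording library and an ElevenLabs client would be added the same way):

    // In the project directory:
    //   npm install openai dotenv

    const fs = require('fs');
    const path = require('path');
    const OpenAI = require('openai'); // official OpenAI Node SDK (v4+)
    require('dotenv').config();

    // One client instance is reused for both the ChatGPT and Whisper calls.
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    // Directory where recorded commands and generated replies are stored.
    const AUDIO_DIR = path.join(__dirname, 'audio');
    if (!fs.existsSync(AUDIO_DIR)) {
      fs.mkdirSync(AUDIO_DIR, { recursive: true });
    }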

ChatGPT

With your packages imported, you can start setting up the OpenAI ChatGPT instance and keyword detection. The ChatGPT instance will handle the natural language processing and understanding, while the keyword detection will allow your assistant to listen for specific commands. Next, you’ll need to initiate and manage the recording process. This process will capture the audio commands given to your assistant and save them as audio files in your ‘audio’ directory.
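
A hedged sketch of that step follows; the wake word, model name, and function names are illustrative choices rather than the ones used in the video, and the `openai` client is the one created in the earlier sketch:

    const OpenAI = require('openai');
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    // Wake word the assistant listens for in transcribed speech.
    const WAKE_WORD = 'assistant';

    // True if a transcribed command contains the wake word.
    function containsKeyword(transcript) {
      return transcript.toLowerCase().includes(WAKE_WORD);
    }

    // Send the transcribed command to ChatGPT and return its text reply.
    async function askChatGPT(transcript) {
      const completion = await openai.chat.completions.create({
        model: 'gpt-4o-mini', // any chat model available on your account
        messages: [
          { role: 'system', content: 'You are a concise, friendly voice assistant.' },
          { role: 'user', content: transcript },
        ],
      });
      return completion.choices[0].message.content;
    }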

OpenAI Whisper

Once your audio commands are saved, they can be transcribed using OpenAI’s Whisper model, which converts the audio into text your assistant can work with. With the command transcribed, your assistant checks for keywords and sends the text to the large language model (LLM), which analyzes the command and generates a text response. That response can then be converted to audio using ElevenLabs’ AI audio generation, saved in your ‘audio’ directory, and played back to the user.
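
Sketched with the same assumptions (Node 18+ for the built-in fetch, a placeholder voice ID, and illustrative file paths), those two calls might look roughly like this:

    const fs = require('fs');
    const OpenAI = require('openai');
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    // Transcribe a recorded command with OpenAI's Whisper API.
    async function transcribe(audioPath) {
      const transcription = await openai.audio.transcriptions.create({
        file: fs.createReadStream(audioPath),
        model: 'whisper-1',
      });
      return transcription.text;
    }

    // Turn the assistant's text reply into speech via the ElevenLabs REST API.
    async function speak(text, outPath) {
      const voiceId = 'YOUR_VOICE_ID'; // placeholder: pick a voice in your ElevenLabs account
      const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
        method: 'POST',
        headers: {
          'xi-api-key': process.env.ELEVENLABS_API_KEY,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ text }),
      });
      if (!res.ok) throw new Error(`ElevenLabs request failed: ${res.status}`);
      fs.writeFileSync(outPath, Buffer.from(await res.arrayBuffer()));
    }

Playing the saved reply (for example audio/reply.mp3) can be handled by any audio-player package or a system call; wiring these functions into the recording loop completes the listen, transcribe, respond, and speak cycle described above.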

Finally, you can customize your assistant to perform certain actions or connect to the internet for further functionality. Creating your very own speaking AI assistant is a fascinating project that can be accomplished with a few tools and some coding knowledge. With ElevenLabs and OpenAI, you can create an assistant that can listen, understand, and respond in a natural, human-like voice.

Filed Under: Guides, Top News




