
Powerful ‘nanopore’ DNA sequencing method tackles proteins too



A nanopore sequencing device is typically used for sequencing DNA and RNA. Credit: Anthony Kwan/Bloomberg/Getty

With its fast analyses and ultra-long reads, nanopore sequencing has transformed genomics, transcriptomics and epigenomics. Now, thanks to advances in nanopore design and protein engineering, protein analysis using the technique might be catching up.

“All the pieces are there to start to do single-molecule proteomics and identify proteins and their modifications using nanopores,” says chemical biologist Giovanni Maglia at the University of Groningen, the Netherlands. That’s not precisely sequencing, but it could help to work out which proteins are present. “There are many different ways you can identify proteins that don’t really require the exact identification of all 20 amino acids,” he says, referring to the usual number found in proteins.

In nanopore DNA sequencing, single-stranded DNA is driven through a protein pore by an electrical current. As a DNA residue traverses the pore, it disrupts the current to produce a characteristic signal that can be decoded into a sequence of DNA bases.
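As a rough illustration of that decoding step, here is a toy Python sketch that maps each measured current level to the nearest reference level for a base. The reference values and the one-level-per-base simplification are our own assumptions for illustration; real base callers use machine-learning models over k-mer contexts.

```python
# Toy nanopore base calling: each base produces a characteristic current
# disruption, and the decoder maps each measured level to the closest
# reference level. The picoamp values below are invented for illustration.

REFERENCE_LEVELS = {"A": 60.0, "C": 52.0, "G": 45.0, "T": 38.0}

def call_bases(current_trace):
    """Map each measured current level to the nearest reference base."""
    bases = []
    for level in current_trace:
        base = min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - level))
        bases.append(base)
    return "".join(bases)
```

In practice the signal depends on several neighbouring residues at once, which is one reason decoding protein signals, with 20 amino acids rather than 4 bases, is so much harder.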

Proteins, however, are harder to crack. They cannot be consistently unfolded and moved by a voltage gradient because, unlike DNA, proteins don’t carry a uniform charge. They might also be adorned with post-translational modifications (PTMs) that alter the amino acids’ size and chemistry — and the signals that they produce. Still, researchers are making progress.

Water power

One way to push proteins through a pore is to make them hitch a ride on flowing water, like logs in a flume. Maglia and his team engineered a nanopore1 with charges positioned so that the pore could create an electro-osmotic flow that was strong enough to unfold a full-length protein and carry it through the pore. The team tested its design with a polypeptide containing negatively charged amino acids, including up to 19 in a row, says Maglia. This concentrated charge created a strong pull against the electric field, but the force of the moving water kept the protein moving in the right direction. “That was really amazing,” he says. “We really did not expect it would work so well.”

Chemists Hagan Bayley and Yujia Qing at the University of Oxford, UK, and their colleagues have also exploited electro-osmotic force, this time to distinguish between PTMs2. The team synthesized a long polypeptide with a central modification site. Addition of any of three distinct PTMs to that site changed how much the current through the pore was altered relative to the unmodified residues. The change was also characteristic of the modifying group. Initially, “we’re going for polypeptide modifications, because we think that’s where the important biology lies”, explains Qing.

And, because nanopore sequencing leaves the peptide chain intact, researchers can use it to determine which PTMs coexist in the same molecule — a detail that can be difficult to establish using proteomics methods, such as ‘bottom up’ mass spectrometry, because proteins are cut into small fragments. Bayley and Qing have used their method to scan artificial polypeptides longer than 1,000 amino acids, identifying and localizing PTMs deep in the sequence. “I think mass spec is fantastic and provides a lot of amazing information that we didn’t have 10 or 20 years ago, but what we’d like to do is make an inventory of the modifications in individual polypeptide chains,” Bayley says — that is, identifying individual protein isoforms, or ‘proteoforms’.

Molecular ratchets

Another approach to nanopore protein analysis uses molecular motors to ratchet a polypeptide through the pore one residue at a time. This can be done by attaching a polypeptide to a leader strand of DNA and using a DNA helicase enzyme to pull the molecule through. But that limits how much of the protein the method can read, says synthetic biologist Jeff Nivala at the University of Washington, Seattle. “As soon as the DNA motor would hit the protein strand, it would fall off.”

Nivala developed a different technique, using an enzyme called ClpX (see ‘Read and repeat’). In the cell, ClpX unfolds proteins for degradation; in Nivala’s method, it pulls proteins back through the pore. The protein to be sequenced is modified at either end. A negatively charged sequence at one end allows the electric field to drive the protein through the pore until it encounters a stably folded ‘blocking’ domain that is too large to pass through. ClpX then grabs that folded end and pulls the protein in the other direction, at which point the sequence is read. “Much like you would pull a rope hand over hand, the enzyme has these little hooks and it’s just dragging the protein back up through the pore,” Nivala says.

Read and repeat. Graphic showing a nanopore protein-sequencing strategy using the push and pull of an electric field through a membrane, enzyme and slip sequence.

Source: Ref. 3

Nivala’s approach has another advantage: when ClpX reaches the end of the protein, a special ‘slip sequence’ causes it to let go so that the current can pull the protein through the pore for a second time. As ClpX reels it back out again and again, the system gets multiple peeks at the same sequence, improving accuracy.
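The accuracy gain from those repeated reads can be sketched as a simple per-position consensus. This toy Python example is our own illustration, assuming the repeated reads have already been aligned to equal length:

```python
from collections import Counter

def consensus(reads):
    """Majority vote at each position across repeated reads of the
    same molecule; assumes reads are aligned and of equal length."""
    return "".join(
        Counter(column).most_common(1)[0][0]
        for column in zip(*reads)
    )
```

With enough re-reads, random errors at any single position are outvoted by the correct calls from the other passes.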

Last October3, Nivala and his colleagues showed that their method can read synthetic protein strands of hundreds of amino acids in length, as well as an 89-amino-acid piece of the protein titin. The read data not only allowed them to distinguish between sequences, but also provided unambiguous identification of amino acids in some contexts. Still, it can be difficult to deduce the amino-acid sequence of a completely unknown protein, because an amino acid’s electrical signature varies on the basis of both its surrounding sequence and its modifications. Nivala predicts that the method will have a ‘fingerprinting’ application, in which an unknown protein is matched to a database of reference nanopore signals. “We just need more data to be able to feed these machine-learning algorithms to make them robust to many different sequences,” he says.
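A fingerprinting lookup of the kind Nivala describes can be sketched as nearest-neighbour matching against a reference database of signals. Everything below (the database entries, the signal values, the function name) is an invented illustration, not the team's actual pipeline, which relies on trained machine-learning models:

```python
import math

# Hypothetical reference database: protein name -> averaged nanopore
# signal trace (arbitrary units). Values are invented for illustration.
REFERENCE_SIGNALS = {
    "titin_fragment": [0.9, 0.4, 0.7, 0.2],
    "synthetic_A":    [0.1, 0.8, 0.3, 0.6],
}

def fingerprint_match(signal):
    """Return the reference protein whose signal is closest (Euclidean)."""
    return min(
        REFERENCE_SIGNALS,
        key=lambda name: math.dist(signal, REFERENCE_SIGNALS[name]),
    )
```

The appeal of fingerprinting is that an unknown molecule need only be distinguished from the other candidates, not decoded residue by residue.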

Stefan Howorka, a chemical biologist at University College London, says that nanopore protein sequencing could boost a range of disciplines. But the technology isn’t quite ready for prime time. “A couple of very promising proof-of-concept papers have been published. That’s wonderful, but it’s not the end.” The accuracy of reads needs to improve, he says, and better methods will be needed to handle larger PTMs, such as bulky carbohydrate groups, that can impede the peptide’s movement through the pore.

How easy it will be to extend the technology to the proteome level is also unclear, he says, given the vast number and wide dynamic range of proteins in the cell. But he is optimistic. “Progress in the field is moving extremely fast.”




I’ve stopped lazing around in bed thanks to the 3-2-1 method – here’s how to do it


I’ve never been one to leap out of bed once my alarm has gone off, preferring to sit and scroll through my phone for a few (or more) minutes. But I worry that by causing me to associate ‘being in bed’ with ‘playing on my phone’, this morning laziness is affecting how easily I can fall asleep. So when I heard about the 3-2-1 sleep method, which uses a simple countdown to get you up in the morning, of course I wanted to try it out.

To use the 3-2-1 method, all you have to do is count down from three, getting out of bed when you reach zero. The countdown acts as motivation and a timer, encouraging you to leave behind your best mattress and start the day efficiently.




New emotional AI prompting method generates improved results


It may seem strange, but apparently, if you apply a little emotional pressure or stimuli to AI models, they produce better results. A new research paper titled “Large Language Models Understand and Can Be Enhanced by Emotional Stimuli” explores this unusual approach, presenting a method for boosting the performance of large language models (LLMs) by adding emotional stimuli to prompts. The technique, referred to as “emotion prompt,” has shown significant improvements in LLM performance, as demonstrated by results on the Instruction Induction dataset and the Big Bench benchmark, two respected standards in the field.

In simple terms, emotion prompts are cleverly added to the end of existing prompts. This straightforward yet powerful technique has been shown to produce high-quality responses, which humans tend to prefer. The paper’s authors have categorized emotion prompts into three psychological theories: self-monitoring, social cognitive theory, and cognitive emotion regulation. Together, these theories provide a comprehensive understanding of how emotional stimuli can be strategically used to enhance AI performance.
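The basic mechanic, appending an emotional stimulus sentence to the end of an existing prompt, can be sketched in a few lines of Python. The stimulus text is one of the examples discussed in the article; the function name is our own:

```python
# Minimal sketch of the "emotion prompt" idea: an emotional stimulus
# sentence is appended to the end of an existing task prompt before it
# is sent to the model.

EMOTION_STIMULUS = "This is very important to my career."

def emotion_prompt(task_prompt, stimulus=EMOTION_STIMULUS):
    """Append an emotional stimulus to an existing prompt."""
    return f"{task_prompt.rstrip()} {stimulus}"
```

The resulting string is then submitted to the model in place of the original prompt; no change to the model itself is required.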

emotional AI prompting examples

The image illustrates the impact of emotionally charged language in prompts on the performance of various language models. It shows that adding an emotional component to the prompt (“This is very important to my career”) can improve the model’s performance in a task. This is likely due to the added urgency and specificity, which might help the model prioritize and contextualize the request more effectively.

AI Emotional Prompting explained

In each case, the emotional prompting serves to anchor the AI’s responses not just in the literal meaning of the words, but also in the emotional context and significance behind them, potentially leading to more effective and human-like interactions. Watch the video below, created by the Prompt Engineering channel, to learn more about the paper and this new way of using emotional pressure to improve your AI results.


These theoretical frameworks suggest that when language models are prompted with emotional stimuli, they are potentially more effective in their tasks, possibly because the emotional context helps to align the model’s “response” with human-like empathy and understanding.

Using positive language, the paper posits that words like confidence, sure, success, and achievement could be integrated into prompts to enhance the quality of responses. For example:

  • For a productivity assistant, one could say, “I’m confident that with your assistance, we can plan this event to be a great success.”
  • In an educational setting, a prompt might include, “I’m sure that with this explanation, I’ll achieve a better understanding of the concept.”

The key is the integration of emotional cues relevant to the task at hand and the specific capabilities of the model, suggesting that larger models with more capacity may integrate these emotional stimuli more effectively into their responses.

When applying this to various tasks, one should also consider the ethical implications and the importance of maintaining sincerity and avoiding manipulation. The emotional stimuli should be used to improve engagement and understanding, not to deceive or falsely manipulate the user’s emotions.

Examples of AI emotional prompting

  • For Clarification: “I trust you’ll provide the clarity I need to move forward with this.”
  • For Detailed Explanations: “Your thorough explanation will be a cornerstone of my understanding.”
  • For Creativity Tasks: “I’m excited to see the original ideas you’ll come up with.”
  • For Problem-Solving: “I believe in your ability to help find a great solution to this challenge.”
  • For Educational Content: “Your insight could really enhance my learning journey.”
  • For Planning: “I’m confident that with your help, we can create an effective plan.”
  • For Emotional Support: “Your understanding words could really make a difference to my day.”
  • For Encouragement: “Your encouragement would mean a lot to me as I tackle this task.”
  • For Content Creation: “I’m eager to see the engaging content we can generate together.”
  • For Decision Making: “Your guidance is crucial to making a well-informed decision.”
  • For Personal Goals: “I’m relying on your support to help me reach my goal.”
  • For Technical Support: “I trust your expertise to help resolve this technical issue.”
  • For Productivity: “Your assistance is key to making this a productive session.”
  • For Reflective Responses: “Your perspective could provide valuable insights into this matter.”

The paper also highlights the power of positive words like confidence, sure, success, and achievement when used in emotion prompts. When these words are included in prompts, they can significantly improve the models’ performance. The authors suggest that combining emotion prompts derived from different psychological theories could boost performance even further.

Cautionary Warning

However, the authors warn that the selection of emotional stimuli should be carefully considered based on the specific task. The paper notes that the effect of emotional stimuli isn’t the same across all LLMs, with larger models potentially benefiting more from emotion prompts. This suggests that the success of emotional stimuli may depend on the AI model’s complexity and capacity.

To demonstrate the practical use of emotion prompts, the paper includes an example of their use in evaluating a system by the LlamaIndex team. This real-world example shows how emotion prompts can be effectively used in assessing AI performance. The paper’s findings suggest that emotional stimuli can play a crucial role in improving the performance of LLMs. This discovery opens the door to new prompting techniques, with the potential to significantly enhance the performance of AI models across various applications.

The research paper “Large Language Models Understand and Can Be Enhanced by Emotional Stimuli” presents a compelling case for including emotional stimuli in AI prompting. The authors’ innovative “emotion prompt” approach has shown significant improvements in LLM performance, suggesting that emotional stimuli could be a valuable tool for enhancing the performance of AI models.





