Memories are made by breaking DNA — and fixing it

When a long-term memory forms, some brain cells experience a rush of electrical activity so strong that it snaps their DNA. Then, an inflammatory response kicks in, repairing this damage and helping to cement the memory, a study in mice shows.

The findings, published on 27 March in Nature¹, are “extremely exciting”, says Li-Huei Tsai, a neurobiologist at the Massachusetts Institute of Technology in Cambridge who was not involved in the work. They contribute to the picture that forming memories is a “risky business”, she says. Normally, breaks in both strands of the double helix DNA molecule are associated with diseases including cancer. But in this case, the DNA damage-and-repair cycle offers one explanation for how memories might form and last.

It also suggests a tantalizing possibility: this cycle might be faulty in people with neurodegenerative diseases such as Alzheimer’s, causing a build-up of errors in a neuron’s DNA, says study co-author Jelena Radulovic, a neuroscientist at the Albert Einstein College of Medicine in New York City.

Inflammatory response

This isn’t the first time that DNA damage has been associated with memory. In 2021, Tsai and her colleagues showed that double-stranded DNA breaks are widespread in the brain, and linked them with learning².

To better understand the part these DNA breaks play in memory formation, Radulovic and her colleagues trained mice to associate a small electrical shock with a new environment, so that when the animals were once again put into that environment, they would ‘remember’ the experience and show signs of fear, such as freezing in place. Then the researchers examined gene activity in neurons in a brain area key to memory — the hippocampus. They found that some genes responsible for inflammation were active in a set of neurons four days after training. Three weeks after training, the same genes were much less active.

The team pinpointed the cause of the inflammation: a protein called TLR9, which triggers an immune response to DNA fragments floating around the insides of cells. This inflammatory response is similar to one that immune cells use when they defend against genetic material from invading pathogens, Radulovic says. However, in this case, the nerve cells were responding not to invaders, but to their own DNA, the researchers found.

TLR9 was most active in a subset of hippocampal neurons in which DNA breaks resisted repair. In these cells, DNA repair machinery accumulated in an organelle called the centrosome, which is often associated with cell division and differentiation. However, mature neurons don’t divide, Radulovic says, so it is surprising to see centrosomes participating in DNA repair. She wonders whether memories form through a mechanism that is similar to how immune cells become attuned to foreign substances that they encounter. In other words, during damage-and-repair cycles, neurons might encode information about the memory-formation event that triggered the DNA breaks, she says.

When the researchers deleted the gene encoding the TLR9 protein from mice, the animals had trouble recalling long-term memories about their training: they froze much less often when placed into the environment where they had previously been shocked than did mice that had the gene intact. These findings suggest that “we are using our own DNA as a signalling system” to “retain information over a long time”, Radulovic says.

Fitting in

How the team’s findings fit with other discoveries about memory formation is still unclear. For instance, researchers have shown that a subset of hippocampal neurons known as an engram is key to memory formation³. These cells can be thought of as a physical trace of a single memory, and they express certain genes after a learning event. But the group of neurons in which Radulovic and her colleagues observed the memory-related inflammation is mostly distinct from the engram neurons, the authors say.

Tomás Ryan, an engram neuroscientist at Trinity College Dublin, says the study provides “the best evidence so far that DNA repair is important for memory”. But he questions whether the neurons encode something distinct from the engram — instead, he says, the DNA damage and repair could be a consequence of engram creation. “Forming an engram is a high-impact event; you have to do a lot of housekeeping after,” he says.

Tsai hopes that future research will address how the double-stranded DNA breaks happen and whether they occur in other brain regions, too.

Clara Ortega de San Luis, a neuroscientist who works with Ryan at Trinity College Dublin, says that these results bring much-needed attention to mechanisms of memory formation and persistence inside cells. “We know a lot about connectivity” between neurons “and neural plasticity, but not nearly as much about what happens inside neurons”, she says.

Memories from when you were a baby might not be gone

Hello Nature readers, would you like to get this Briefing in your inbox free every day? Sign up here.

Brown Skua, Stercorarius antarcticus, calling in front of a King Penguin colony.

Avian flu has been detected in sub-Antarctic king penguins. Credit: Education Images/Universal Images Group via Getty

Some researchers in Antarctica are halting work after the global spread of deadly H5N1 avian influenza finally reached the continent. Bird flu was detected on the Antarctic mainland for the first time last month, in dead skuas (Stercorarius antarcticus). Spanish and Argentine research projects into vulnerable birds, seals and penguins have been suspended to reduce the risk of researchers spreading infection — or becoming infected themselves.

Nature | 4 min read

The most comprehensive report to date of compounds in plastic has found a laundry list of hazardous ingredients. Of more than 16,000 chemicals found in plastics or thought to be used in them, at least 4,200 are “persistent, bioaccumulative, mobile and/or toxic”, according to a group funded by the Norwegian Research Council. For more than 10,000 chemicals no hazard data were available, and for more than 9,000 there was no publicly available information about which plastics they are used in. The report’s authors argue for a ‘red list’ of 3,600 concerning compounds that should be regulated.

Nature | 5 min read

Reference: PlastChem Project report

Patients with a deadly type of brain cancer called glioblastoma saw their tumours shrink following CAR-T therapy, a treatment based on modifying a patient’s own immune cells to target proteins in the cancer. These are early results from two small studies, and in many cases the tumours grew back, but they suggest the treatment has promise. The goal now is to generate longer-lasting responses. “It lends credence to the potential power of CAR-T cells to make a difference in solid tumours, especially the brain,” says neurosurgeon Bryan Choi, lead author of one of the studies. CAR-T cells are currently approved only for treating blood cancers, such as leukaemia.

Nature | 4 min read

References: New England Journal of Medicine paper and Nature Medicine paper

The US has approved the first drug to treat an obesity-linked liver disease that affects an estimated 5% of the world’s adults. Resmetirom, to be marketed as Rezdiffra, treats metabolic dysfunction-associated steatohepatitis (MASH) — formerly known as non-alcoholic steatohepatitis (NASH). After many earlier drug failures, resmetirom is the first to reduce scar tissue known as fibrosis in the liver. But researchers caution that evidence for long-term benefits is still needed. “Only time will tell,” says gastroenterologist Maya Balakrishnan. “In the end, what matters is: does this drug improve survival?”

Nature | 4 min read

A ‘hurrah moment’: go deeper into the development and approval of resmetirom in Nature Reviews Drug Discovery (10 min read)

Features & opinion

People have no memories from before about three years old, and no one knows why. “It’s a paradox in a sense,” says neuroscientist Flavio Donato. “In the moment that the brain is learning at a rate it will never show again during the whole lifetime, those memories seem not to stick in the brain.” New research suggests that maybe those memories aren’t gone after all — we just can’t consciously access them. Scientists are swapping lab rats and mazes for playrooms and plush toys to reveal what’s going on inside tiny tots’ heads.

Science | 12 min read

A trio of experienced scientists has put together a project-prioritizing checklist to keep early-career researchers from being pulled in too many directions. They suggest rating each project on a scale of 1 (strongly disagree) to 5 (strongly agree) on the following points:

The project is with people I trust to be good scientists

I look forward to meetings with my project collaborators

The topic of the project is interesting to me

The project fits with my desired professional identity

Data collection for the project is going well

The results seem to be robust

Disregard any projects that score a 1 in any category and charge ahead with those with the highest score.

Nature | 5 min read

In Journeys of Black Mathematicians, film maker George Csicsery reveals how Black scholars shaped today’s US mathematics community and provides hope for the future. “It is wonderful to learn about successes in academia and industry,” writes Black mathematician Noelle Sawyer in her review. “The question that needs to be asked now is which spaces are worth entering.” Furthering representation should not mean doing morally questionable work, such as creating weapons, argues Sawyer. “Pushing back against the inequities of the past and present should not include contributing to the oppression of others.”

Nature | 6 min read

Watch Journeys of Black Mathematicians online

Feeling scared or overwhelmed about the future of our warming planet is now part of the human condition, says atmospheric scientist Adam Sobel. The greatest harm of climate change, Sobel says, comes from its role as a ‘threat multiplier’ — for example, contributing to democratic backsliding. “The important thing is to remain engaged,” he says, for example by voting for politicians who push forward the clean-energy transition. Scientists can also orient their research more towards supporting climate-adaptation planning. “Maybe a more pragmatic and constructive question than ‘how doomed are we?’ is ‘what should we do about it?’”

Nature | 10 min read

Andrew Robinson’s pick of the top five science books to read this week includes a fascinating account of ophthalmology and life with vision impairment and a witty cogitation on how robots learnt languages.

Nature | 4 min read

Infographic of the week

Annual review. A stacked percentage bar chart showing the breakdown of productive hours spent on areas such as teaching and research.

Throughout her first year on the tenure track, psychologist Megan Rogers tracked all of her productive activities in 30-minute increments. Her key takeaways were that working more than 45 hours a week was unsustainable, tasks often took longer than expected, having a non-working life didn’t make her less productive and it’s OK for focus to ebb and flow over time. If you want to try out time tracking, you can download Rogers’ Microsoft Excel template. (Nature | 6 min read)

Quote of the day

Neuroscientist Susan Rogers, who started off her career as Prince’s audio engineer, says that musicians and scientists have more in common than one might guess — both need to be open-minded and to be able to separate relevant and irrelevant information. (Nature | 10 min read)

How to fine tune large language models (LLMs) with memories

If you would like to learn how to fine-tune large language models (LLMs) to improve their ability to memorize and recall information from a specific dataset, you might be interested to know that the fine-tuning process involves creating a synthetic question-and-answer dataset from the original content, which is then used to train the model.

This approach is designed to overcome the limitations of language models, which typically struggle with memorization because they are trained on large, diverse datasets. To explain the process in more detail, Trelis Research has created an interesting guide and overview of how you can fine-tune large language models for memorization.

Imagine you’re working with a language model, a type of artificial intelligence that processes and generates human-like text. You want it to remember and recall information better, right? Well, there’s a way to make that happen, and it’s called fine-tuning. This method tweaks the model to make it more efficient at holding onto details, which is especially useful for tasks that need precision.

Language models are smart, but they have a hard time keeping track of specific information. One well-documented symptom is the “reversal curse”: a model trained on facts phrased one way (for example, “A is B”) often fails to recall them when asked the other way around (“B is A”). And because these models are trained on huge amounts of varied data, any single fact can get drowned out. To fix this, you need to teach the model to focus on what’s important.

Giving LLMs memory by fine tuning

One effective way to do this is by creating a custom dataset that’s designed to improve memory. You can take a document and turn it into a set of questions and answers. When you train your model with this kind of data, it gets better at remembering because it’s practicing with information that’s relevant to what you need.
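To make that concrete, here is a minimal sketch of the dataset-building step in Python. It uses the official `openai` client as a stand-in for whichever model you use to write the questions; the prompt wording, chunk size and model name are illustrative choices, not part of the original guide:

```python
# Sketch: turn a source document into synthetic Q&A pairs for fine-tuning.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
import json
from openai import OpenAI

client = OpenAI()

def document_to_qa_pairs(text: str, chunk_size: int = 1500) -> list[dict]:
    """Split a document into chunks and ask a model to write Q&A pairs for each."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    pairs = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any capable instruction model works
            messages=[{
                "role": "user",
                "content": (
                    "Write three question-answer pairs that test recall of the "
                    "facts in the passage below. Respond only with a JSON list of "
                    '{"question": ..., "answer": ...} objects.\n\n' + chunk
                ),
            }],
        )
        # Sketch only: assumes the model returns valid JSON as instructed.
        pairs.extend(json.loads(response.choices[0].message.content))
    return pairs
```

The resulting list of question-answer pairs is what you then format into training examples.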

Now, fine-tuning isn’t just about the data; it’s also about adjusting certain settings, known as hyperparameters. These include things like how much data the model sees at once (batch size), how quickly it learns (learning rate), and how many times it goes through the training data (epoch count). Tweaking these settings can make a big difference in how well your model remembers.
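In the Hugging Face ecosystem, for instance, those three knobs map directly onto training arguments. A minimal sketch, with illustrative starting values rather than tuned recommendations:

```python
# Sketch: the three hyperparameters described above, expressed as Hugging Face
# TrainingArguments. The values are illustrative starting points only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="memorization-finetune",
    per_device_train_batch_size=4,  # batch size: how much data the model sees at once
    learning_rate=2e-5,             # learning rate: how quickly it learns
    num_train_epochs=3,             # epoch count: passes through the training data
    logging_steps=10,
)
```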

Fine tuning large language models

Choosing the right model to fine-tune is another crucial step. You want to start with a model that’s already performing well before you make any changes. This way, you’re more likely to see improvements after fine-tuning. For fine-tuning to work smoothly, you need some serious computing power. That’s where a Graphics Processing Unit (GPU) comes in. These devices are made for handling the intense calculations that come with training language models, so they’re perfect for the job.
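As a rough sketch of that setup step, here is how you might load a strong base model onto a GPU with the `transformers` library; the checkpoint name is just an example, not a recommendation from the guide:

```python
# Sketch: load a pretrained base model onto the GPU before fine-tuning.
# The checkpoint name is an example; substitute the model you start from.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto" if torch.cuda.is_available() else None,
)
```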

Once you’ve fine-tuned your model, you need to check how well it’s doing. You do this by comparing its performance before and after you made the changes. This tells you whether your fine-tuning was successful and helps you understand what worked and what didn’t. Fine-tuning is a bit of an experiment. You’ll need to play around with different hyperparameters and try out various models to see what combination gives you the best results. It’s a process of trial and error, but it’s worth it when you find the right setup.
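One simple way to put a number on “before and after” is exact-match recall over held-out question-answer pairs. A minimal sketch, reusing the model and tokenizer loaded above; the substring check is a deliberate simplification:

```python
# Sketch: score recall on held-out Q&A pairs. Run once before fine-tuning and
# once after, then compare the two accuracies.
def recall_accuracy(model, tokenizer, qa_pairs: list[dict]) -> float:
    hits = 0
    for pair in qa_pairs:
        inputs = tokenizer(pair["question"], return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
        answer = tokenizer.decode(output[0], skip_special_tokens=True)
        # Crude exact-substring check; real evaluations often use fuzzy
        # matching or an LLM judge instead.
        hits += pair["answer"].lower() in answer.lower()
    return hits / len(qa_pairs)
```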

To really know if your fine-tuned model is up to par, you should compare it to some of the top models out there, like GPT-3.5 or GPT-4. This benchmarking shows you how your model stacks up and where it might need some more work.

So, if you’re looking to enhance a language model’s memory for your specific needs, fine-tuning is the way to go. With a specialized dataset, the right hyperparameter adjustments, a suitable model, and the power of a GPU, you can significantly improve your model’s ability to remember and recall information. And by evaluating its performance and benchmarking it against the best, you’ll be able to ensure that your language model is as sharp as it can be.

Giving AI memories with Sparse Priming Representation (SPR)

If you’ve ever marveled at the human brain’s remarkable ability to store and recall information, you’ll be pleased to know that researchers are hard at work trying to imbue artificial intelligence with similar capabilities. Enter Sparse Priming Representation (SPR), a cutting-edge technique designed to make AI’s memory storage and retrieval as efficient as ours. In this comprehensive guide, we’ll delve deep into the world of SPR and how it could be a game-changer for the future of AI.

What is Sparse Priming Representation (SPR)?

To put it simply, SPR is a memory organization method that seeks to emulate how human memory works. This technology distills complex thoughts, ideas, and knowledge into concise, context-driven lists of statements. By doing so, it allows machines, as well as human experts, to grasp and recall these complex ideas quickly and efficiently.

Here are its main features:

  • Minimalistic Representation: Stores complex ideas using minimal keywords or phrases.
  • Context Preservation: Maintains the surrounding context for accurate reconstruction.
  • Quick Retrieval: Facilitates rapid recall of stored information.
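To make this concrete, here is an invented example of what an SPR for a familiar concept might look like; the exact wording is my own, but it shows the idea of compressing a long explanation into a handful of context-rich statements:

```python
# Invented example: an SPR for "photosynthesis". Each line is a terse,
# complete statement that preserves just enough context for reconstruction.
photosynthesis_spr = [
    "Plants convert light, water and CO2 into glucose and O2.",
    "Occurs in chloroplasts; chlorophyll absorbs the light.",
    "Two stages: light reactions (ATP, NADPH) and the Calvin cycle (carbon fixation).",
    "Roughly the inverse of respiration; underpins most food chains.",
]
```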

If you’re familiar with terms like “data overload” and “information glut,” you’ll understand the pressing need for efficient memory systems in AI. As machine learning models grow larger and more sophisticated, so does the volume of data they have to process and remember. This is where SPR comes in to save the day. Applications of SPR include:

  • Artificial Intelligence: Enhances memory organization in Large Language Models (LLMs).
  • Information Management: Simplifies the categorization and retrieval of data.
  • Education: Helps students and professionals understand and retain complex subjects.

What is Data Overload?

We live in a world where tons of data are created every day, from tweets to weather updates. For AI, data overload happens when there’s too much information coming in to handle properly. Think of it like trying to find a book in a messy library; the more books there are on the floor, the harder it is to find the one you need.

What is Information Glut?

This term is about having so much information that it becomes hard to know what really matters. It’s like getting a bunch of notifications on your phone, but only one or two are actually important, like a message from your boss. The rest are just distractions.

This is where Sparse Priming Representation (SPR) comes in. SPR helps AI sort through all that data and focus on what’s important. It’s like having a few key books in the messy library tagged, so you can find what you’re looking for easily. This doesn’t just make AI faster; it makes it better at the jobs it’s supposed to do.

AI training

In case you’re curious how SPR fits into the bigger picture of AI training, let’s briefly discuss the existing methods:

  1. Initial Bulk Training: Ludicrously expensive and often impractical.
  2. Fine-tuning: Limited utility for knowledge retrieval.
  3. Online Learning: Commercial viability is still in question.
  4. In-context Learning: The most viable current solution.

SPR’s major contribution lies in its token-efficiency, which optimizes memory organization. This becomes invaluable, especially when we deal with constraints like the context window in Retrieval-Augmented Generation (RAG) systems. Simply put, SPR can be the ultimate way to teach LLMs how to better remember and apply information.

Most people underestimate the power of the latent space in AI models. SPR capitalizes on this underutilized feature, enabling what is known as associative learning. With just a few keywords or statements, SPR can “prime” an AI model to understand complex ideas—even those that were outside its original training data. So if you’re struggling to make your AI model understand concepts like “Heuristic Imperatives” or the “ACE Framework,” SPR could be the secret sauce you’ve been missing.
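As a sketch of what that priming looks like in practice, you can simply place the SPR in the system prompt before the user’s question. The example below uses the OpenAI Python client purely as an illustrative stand-in for any chat-style LLM API:

```python
# Sketch: "prime" a model with an SPR via in-context learning, then query it.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask_primed(spr: list[str], question: str) -> str:
    # The SPR rides along in the system prompt, priming the model before
    # the user's question arrives.
    priming = "You have internalized the following concept:\n" + "\n".join(spr)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system", "content": priming},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```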

Sparse Priming Representation (SPR) benefits and features

SPR is a technique for organizing memory that mimics the structure and recall patterns observed in human memory.

Objective: To distill complex ideas, memories, or concepts into minimal sets of keywords, phrases, or statements for efficient storage and retrieval.

Applicability: Used by subject matter experts and large language models (LLMs) to reconstruct complex concepts quickly.

  • Human Memory Efficiency:
    • Stores information in compressed, contextually relevant forms.
    • Utilizes sparse, interconnected representations for quick recall and synthesis of new ideas.
  • SPR Methodology:
    • Focuses on reducing information to its most essential elements.
    • Retains the context necessary for accurate reconstruction using short, complete sentences.
  • Practical Applications:
    • Domains include artificial intelligence, information management, and education.
    • Can improve LLM performance, optimize memory organization, and facilitate effective learning and communication tools.
  • Limitations in Teaching LLMs:
    • Initial bulk training: Expensive.
    • Fine-tuning: May not be useful for knowledge retrieval.
    • Online Learning: Uncertain commercial viability.
    • In-context Learning: Currently the only viable method.
  • Current Trends:
    • Retrieval Augmented Generation (RAG) is popular, using vector databases and Knowledge Graphs (KGs).
    • Common question: “How to overcome context window limitations?” Short answer: you generally can’t.
  • Role of Latent Space:
    • LLMs possess a unique capability similar to human associative learning.
    • Can be “primed” to think in a certain way or to understand complex, novel ideas outside their training distribution.
  • Token-Efficiency with SPR:
    • SPRs are used to convey complex concepts efficiently for in-context learning.
    • Stored as metadata in Knowledge Graph nodes and fed to the LLM at inference, bypassing the need for raw, human-readable data.
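A minimal sketch of that last point, using `networkx` as a stand-in Knowledge Graph; the node names and the `spr` attribute are invented for illustration:

```python
# Sketch: store SPRs as metadata on Knowledge Graph nodes, then assemble a
# compact, token-efficient context for the LLM at inference time.
import networkx as nx

kg = nx.DiGraph()
kg.add_node("photosynthesis", spr=[
    "Plants convert light, water and CO2 into glucose and O2.",
    "Two stages: light reactions (ATP, NADPH) and the Calvin cycle.",
])
kg.add_node("respiration", spr=[
    "Cells oxidize glucose to release ATP; roughly the inverse of photosynthesis.",
])
kg.add_edge("photosynthesis", "respiration", relation="inverse_process")

def context_for(topic: str) -> str:
    """Join the SPRs of a topic and its neighbours into one prompt block."""
    nodes = [topic, *kg.successors(topic)]
    return "\n".join(line for n in nodes for line in kg.nodes[n]["spr"])

# context_for("photosynthesis") is what gets fed to the LLM instead of raw,
# human-readable documents.
```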

As we continue to push the boundaries of what AI can achieve, it’s techniques like SPR that take us closer to creating machines that can think and learn more like humans. Whether you’re a researcher, a student, or simply an AI enthusiast, understanding the potential of SPR could significantly enhance your experience with this revolutionary technology.

In the rapidly evolving landscape of AI, the promise of SPR as a human-like approach to memory storage and retrieval is not just exciting—it truly is revolutionary. It stands as a bridge between the worlds of human cognition and machine intelligence, ensuring that as our computers grow smarter, they also grow more efficient and relatable. To learn more about SPR, jump over to the official GitHub repository for more details.

Creating Memories: Why Melbourne’s Function & Event Venues Are a Must-Try

When creating unforgettable memories, the choice of venue plays a crucial role. Whether you’re planning a wedding, a corporate event, a milestone birthday celebration, or any other special occasion, Melbourne’s function and event venues offer a diverse and vibrant backdrop that can transform any gathering into a remarkable experience. 

In this blog post, we’ll explore why Melbourne’s event venues like Harbour Kitchen are a must-try, highlighting their unique features, cultural richness, and the endless possibilities they offer to make your event special.

1. Diversity of Venues

Melbourne is a city that prides itself on its diverse cultural heritage, and this diversity is beautifully reflected in its event venues. From historic mansions and lush gardens to modern, state-of-the-art conference centers, you’ll find venues to suit every taste and preference. This diversity ensures that you can find the perfect setting to match the theme and mood of your event, whether it’s an intimate family gathering or a grand corporate gala.

2. Iconic Landmarks

Melbourne boasts a plethora of iconic landmarks that can serve as a stunning backdrop for your event. Imagine hosting a cocktail party overlooking the stunning Yarra River or saying “I do” with the iconic Flinders Street Station in the background. These landmarks add a touch of magic to your event and make it easy for your guests to find and navigate the venue.

3. Culinary Excellence

Melbourne is renowned for its vibrant food scene, and its event venues are no exception. Many venues collaborate with top-tier chefs and caterers to provide an exceptional dining experience for your guests. Whether you’re planning a formal dinner, a casual brunch, or an international buffet, you can expect a culinary journey that will leave a lasting impression on your guests’ taste buds.

4. Cultural Enrichment

Melbourne is often referred to as Australia’s cultural capital, and this reputation extends to its event venues. You can host your event in venues steeped in history and culture, such as the iconic Melbourne Town Hall or the Royal Exhibition Building. These venues offer a sense of grandeur and an opportunity to immerse your guests in Melbourne’s rich cultural heritage.

5. Stunning Natural Beauty

For those who appreciate the beauty of nature, Melbourne’s function and event venues have you covered. The city has numerous parks, gardens, and waterfront venues that offer breathtaking views and a tranquil atmosphere. Whether you prefer a garden wedding, a riverside ceremony, or a rooftop party with panoramic city views, Melbourne’s natural beauty can elevate your event to the next level.

6. Professional Event Services

Melbourne’s event venues are well-equipped with experienced event professionals who can help you plan and execute your event seamlessly. From event coordinators and decorators to audiovisual technicians and security personnel, you’ll have access to a dedicated team of experts committed to making your event successful.

7. Accessibility

Melbourne’s function and event venues are conveniently located and easily accessible. The city’s well-connected public transportation system ensures that your guests can easily reach the venue, reducing logistical challenges and making it a stress-free experience for everyone.

8. Year-Round Appeal

One of Melbourne’s most significant advantages as an event destination is its year-round appeal. Regardless of the season, you’ll find venues to accommodate your event needs. From cozy winter weddings in historic venues to summer rooftop parties, Melbourne’s climate and venue options ensure you can plan your event anytime.

9. Creativity and Customization

Melbourne’s event venues offer a blank canvas for your creativity. Whether you have a specific theme or want to create a unique experience, these venues provide the flexibility to customize every event detail. You can bring your vision to life, from lighting and decor to entertainment and activities.

10. Unforgettable Memories

Ultimately, what truly matters is the memories you create at your event. Melbourne’s function and event venues provide the perfect backdrop for those memorable moments, whether it’s the joyous laughter of friends and family, the heartfelt exchange of vows, or the successful conclusion of a corporate conference. These venues can turn ordinary gatherings into extraordinary memories that will be cherished forever.

In conclusion, Melbourne’s function and event venues are a must-try for anyone looking to create unforgettable memories. With their diversity, cultural richness, stunning settings, and professional services, these venues offer the perfect recipe for successful and memorable events. So, whether you’re planning a wedding, a corporate event, or a special celebration, consider Melbourne your destination for crafting memories that will last a lifetime. Melbourne’s venues are not just places but the stages where your memories come to life.