‘ChatGPT for CRISPR’ creates new gene-editing tools

A 3D model of the CRISPR-Cas9 gene editing complex from Streptococcus pyogenes.Credit: Indigo Molecular Images/Science Photo Library

In the never-ending quest to discover previously unknown CRISPR gene-editing systems, researchers have scoured microbes in everything from hot springs and peat bogs to poo and even yogurt. Now, thanks to advances in generative artificial intelligence (AI), they might be able to design these systems with the push of a button.

This week, researchers published details of how they used a generative AI tool called a protein language model — a neural network trained on millions of protein sequences — to design CRISPR gene-editing proteins, and were then able to show that some of these systems work as expected in the laboratory1.

And in February, another team announced that it had developed a model trained on microbial genomes, and used it to design fresh CRISPR systems, which comprise a DNA- or RNA-cutting enzyme and RNA molecules that direct the molecular scissors as to where to cut2.

“It’s really just scratching the surface. It’s showing that it’s possible to design these complex systems with machine-learning models,” says Ali Madani, a machine-learning scientist and chief executive of the biotechnology firm Profluent, based in Berkeley, California. Madani’s team reported what it says is “the first successful editing of the human genome by proteins designed entirely with machine learning” in a 22 April preprint1 on bioRxiv.org (which hasn’t been peer-reviewed).

Alan Wong, a synthetic biologist at the University of Hong Kong, whose team has used machine learning to optimize CRISPR3, says that naturally occurring gene-editing systems have limitations in terms of the sequences that they can target and the sort of changes that they can make. For some applications, therefore, it can be a challenge to find the right CRISPR. “Expanding the repertoire of editors, using AI, could help,” he says.

Trained on genomes

Whereas chatbots such as ChatGPT are designed to handle language after being trained on existing text, the CRISPR-designing AIs were instead trained on vast troves of biological data in the form of protein or genome sequences. The goal of this ‘pre-training’ step is to imbue the models with insight into naturally occurring genetic sequences, such as which amino acids tend to go together. This information can then be applied to tasks such as the creation of totally new sequences.
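The pre-training idea can be sketched with a toy model. The hypothetical Python example below learns only which amino acid tends to follow which (a bigram model) from four made-up sequences, then samples a new one. Real protein language models such as ProGen are large neural networks trained on millions of sequences, so this is an illustration of the principle, nothing more:

```python
import random
from collections import defaultdict

# Toy sketch of 'pre-training': learn which amino acids tend to follow
# one another in known sequences, then sample new ones. The four
# training sequences are invented for illustration.

def train_bigram(sequences):
    """Count amino-acid transitions observed in the training data."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sample_sequence(counts, start, length, rng):
    """Grow a new sequence, choosing each next residue in proportion
    to how often it followed the previous one in training."""
    seq = [start]
    while len(seq) < length:
        followers = counts.get(seq[-1])
        if not followers:
            break  # no observed continuation for this residue
        residues = list(followers)
        weights = [followers[r] for r in residues]
        seq.append(rng.choices(residues, weights=weights, k=1)[0])
    return "".join(seq)

training = ["MKVLAA", "MKVLGA", "MKILAA", "MKVVGA"]  # made-up examples
model = train_bigram(training)
new_protein = sample_sequence(model, "M", 6, random.Random(0))
print(new_protein)
```

Scaled up by many orders of magnitude, the same generate-from-learned-statistics idea is what lets such models propose plausible but novel protein sequences.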

Madani’s team previously used a protein language model they developed, called ProGen, to come up with new antibacterial proteins4. To devise new CRISPRs, his team retrained an updated version of ProGen with examples of millions of diverse CRISPR systems, which bacteria and other single-celled microbes called archaea use to fend off viruses.

Because CRISPR gene-editing systems comprise not only proteins, but also RNA molecules that specify their target, Madani’s team developed another AI model to design these ‘guide RNAs’.

The team then used the neural network to design millions of new CRISPR protein sequences that belong to dozens of different families of such proteins found in nature. To see whether AI-designed CRISPRs were bona fide gene editors, Madani’s team synthesized DNA sequences corresponding to more than 200 protein designs belonging to the CRISPR–Cas9 system that is now widely used in the laboratory. When they inserted these sequences — instructions for a Cas9 protein and a ‘guide RNA’ — into human cells, many of the gene editors were able to precisely cut their intended targets in the genome.
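The targeting rule these editors follow can be made concrete. For the widely used SpCas9, the guide RNA carries a roughly 20-letter 'spacer' matching a genomic 'protospacer' that must sit immediately upstream of an NGG motif (the PAM). The sketch below, using an invented sequence, scans for such sites; it illustrates the general rule, not Profluent's design pipeline:

```python
# Minimal sketch of SpCas9 target-site selection: a guide RNA's ~20-nt
# spacer matches a genomic protospacer that must be followed
# immediately by an NGG PAM on the same strand. The sequence below is
# invented for illustration.

def find_cas9_sites(dna, spacer_len=20):
    """Return (spacer, start_index) for every protospacer+NGG in `dna`."""
    sites = []
    for j in range(len(dna) - spacer_len - 2):
        pam = dna[j + spacer_len : j + spacer_len + 3]
        if pam[1:] == "GG":  # the N of NGG can be any base
            sites.append((dna[j : j + spacer_len], j))
    return sites

def spacer_to_guide(spacer):
    """The guide RNA spacer is the RNA version of the protospacer."""
    return spacer.replace("T", "U")

genome = "ACGTACGTACGTACGTACGT" + "TGG" + "AAAA"  # one planted site
for spacer, pos in find_cas9_sites(genome):
    print(pos, spacer, spacer_to_guide(spacer))
```

Part of the appeal of AI-designed editors is precisely to loosen constraints like the PAM requirement that limit where natural systems can cut.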

The most promising Cas9 protein — a molecule they’ve named OpenCRISPR-1 — was just as efficient at cutting targeted DNA sequences as a widely used bacterial CRISPR–Cas9 enzyme, and it made far fewer cuts in the wrong place. The researchers also used the OpenCRISPR-1 design to create a base editor — a precision gene-editing tool that changes individual DNA ‘letters’ — and found that it, too, was as efficient as other base-editing systems, as well as less prone to errors.

Another team, led by Brian Hie, a computational biologist at Stanford University in California, and by bioengineer Patrick Hsu at the Arc Institute in Palo Alto, California, used an AI model capable of generating both protein and RNA sequences. Their model, called Evo, was trained on 80,000 genomes from bacteria and archaea, as well as other microbial sequences, amounting to 300 billion DNA letters. Hie and Hsu's team has not yet tested its designs in the lab. But predicted structures of some of the CRISPR–Cas9 systems they designed resemble those of natural proteins. Their work was described in a preprint2 posted on bioRxiv.org, and has not been peer-reviewed.

Precision medicine

“This is amazing,” says Noelia Ferruz Capapey, a computational biologist at the Molecular Biology Institute of Barcelona in Spain. She’s impressed by the fact that researchers can use the OpenCRISPR-1 molecule without restriction, unlike with some patented gene-editing tools. The ProGen2 model and ‘atlas’ of CRISPR sequences used to fine-tune it are also freely available.

The hope is that AI-designed gene-editing tools could be better suited to medical applications than are existing CRISPRs, says Madani. Profluent, he adds, is hoping to partner with companies that are developing gene-editing therapies to test AI-generated CRISPRs. “It really necessitates precision and a bespoke design. And I think that just can’t be done by copying and pasting” from naturally occurring CRISPR systems, he says.


Google Genie AI creates interactive game worlds from images

Imagine a world where the lines between reality and the digital realm blur, where you can step into a photograph and explore it as if it were a living, breathing environment. In the ever-evolving landscape of artificial intelligence, Google DeepMind has made a striking advancement with the creation of Genie, an AI that can generate an endless array of 2D worlds for gaming. This innovative tool is trained on a massive amount of gaming footage and uses a complex model with 11 billion parameters to understand and create new gaming environments.

Genie is not just another AI; it’s a sophisticated system that can interpret hidden actions within data, known as latent actions. This allows it to take simple images, even children’s drawings, and turn them into interactive, playable worlds. The implications of this technology are vast, with potential applications in robotics and the pursuit of artificial general intelligence.

What sets Genie apart is its use of unsupervised learning. Unlike traditional AI, which relies on clearly labeled data, Genie learns by identifying patterns and relationships on its own. This means it can process a wide range of internet videos to learn how to create games without being influenced by existing biases. This approach is key to providing a varied and engaging gaming experience.

Google DeepMind Genie world creator

Genie’s capabilities extend beyond learning. It can take images, sketches, and photos and transform them into virtual worlds that understand and replicate physical properties, such as depth. It can even learn to mimic behaviors from videos it has never seen before, showcasing its incredible adaptability.

The AI’s performance is impressive. With minimal examples, Genie can replicate the gameplay of highly skilled players, a testament to its extensive parameters and the ability to scale with computational resources. Furthermore, Genie’s training includes robotics data, highlighting its potential in creating versatile AI agents.

As a foundational world model, Genie is at the forefront of AI systems that can generate and manage virtual environments. Its development marks a significant step forward and opens the door to more sophisticated AI applications in gaming, robotics, and beyond.

Google DeepMind’s Genie is a remarkable AI that does more than create games; it heralds a new era of artificial intelligence. Its capacity to produce an infinite number of playable 2D worlds from image prompts is a powerful demonstration of unsupervised learning’s capabilities. The progress of Genie is a clear indicator of the vast potential AI holds for transforming various industries and the exciting possibilities that lie ahead.

DeepMind’s Genie: A Leap in AI-Driven Game Creation

DeepMind’s Genie represents a significant leap in the field of artificial intelligence, particularly in the realm of game development. By harnessing the power of a neural network with 11 billion parameters, Genie can analyze and synthesize gaming environments with unprecedented complexity and variety. This neural network is a type of machine learning model designed to recognize patterns in large datasets, similar to the way the human brain operates. The sheer number of parameters indicates the model’s capacity to process and generate intricate details within the 2D worlds it creates, making each environment unique and engaging for players.

The technology behind Genie is not only about creating visually appealing worlds but also about understanding the underlying mechanics that make a game enjoyable and functional. By interpreting latent actions, which are the implicit decisions and movements within a game, Genie can construct playable worlds that respond to player interactions in a realistic and dynamic manner. This capability is crucial for developing games that are not only fun to look at but also offer a rich and interactive gaming experience.

Unsupervised Learning: The Engine Behind Genie’s Creativity

One of the most groundbreaking aspects of Genie is its reliance on unsupervised learning. This form of machine learning does not require labeled datasets, which are typically used to teach AI systems by providing examples with predefined outcomes. Instead, unsupervised learning algorithms identify patterns and relationships within the data on their own. This approach allows Genie to learn from a diverse array of internet videos, including gaming footage, without the need for explicit instructions or guidance. As a result, the AI can develop a broader understanding of game design principles and apply them in novel ways, free from the constraints of human bias.
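The contrast with label-driven training can be illustrated with a classic unsupervised algorithm. The toy k-means sketch below groups unlabelled 2D points into clusters purely from their similarity; Genie's training is vastly more sophisticated, so treat this only as a minimal picture of finding structure without labels (the six points and the choice of k=2 are invented):

```python
# Toy illustration of unsupervised learning: k-means groups points by
# similarity with no labels provided, loosely analogous to discovering
# structure (such as latent actions) in unlabelled video.

def kmeans(points, k, iters=10):
    # Deterministic initialisation for this sketch (works for k=2):
    # use the first and last points as starting centres.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre.
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # move each centre to the mean of its group
                centers[i] = (sum(p[0] for p in g) / len(g),
                              sum(p[1] for p in g) / len(g))
    return sorted(centers)

data = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),   # blob near the origin
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # blob near (5, 5)
centers = kmeans(data, k=2)
print(centers)
```

No point was ever told which blob it belongs to; the grouping emerges from the data alone, which is the essence of the unsupervised approach described above.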

The unsupervised learning approach is particularly advantageous for creating a wide variety of gaming experiences. Since Genie is not limited to a specific set of rules or styles, it can generate games that are not only unpredictable and original but also tailored to an extensive range of preferences and interests. This flexibility is key to keeping players engaged and ensuring that the gaming landscapes it creates are always fresh and exciting.

Implications for AI Development and Industry Applications

The development of Genie by Google DeepMind is more than just an advancement in gaming technology; it signifies a broader shift in the capabilities of AI systems. The ability to generate an endless array of 2D worlds from simple image prompts showcases the potential of AI to understand and recreate complex systems. This technology could have far-reaching implications beyond gaming, including advancements in robotics where AI agents need to navigate and interact with unpredictable environments.

Moreover, Genie’s proficiency in creating virtual worlds that accurately simulate physical properties and behaviors suggests that AI can achieve a higher level of understanding of the real world. This understanding is crucial for the development of artificial general intelligence (AGI), which aims to create AI systems that can perform any intellectual task that a human can. As foundational world models like Genie continue to evolve, they pave the way for more sophisticated AI applications that could revolutionize not only entertainment but also industries such as healthcare, transportation, and urban planning.

In summary, Google DeepMind’s Genie is a remarkable AI system that exemplifies the power of unsupervised learning and the potential for AI to innovate across various sectors. Its ability to create infinite, interactive gaming worlds from minimal input is a striking demonstration of the progress being made in artificial intelligence, and it hints at the transformative impact AI could have on our world in the years to come.

Filed Under: Gaming News, Technology News, Top News






Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


Stable 3D AI creates 3D models from text prompts in minutes

The ability to create 2D images using AI has already been mastered and dominated by tools such as Midjourney, OpenAI’s latest DALL·E 3, Leonardo AI and Stable Diffusion. Now Stability AI, the creator of Stable Diffusion, is entering the realm of creating 3D models from text prompts in just minutes, with the release of its new automatic 3D content creation tool, Stable 3D AI. This innovative tool is designed to simplify the 3D content creation process, making the generation of concept-quality textured 3D objects more accessible than ever before.

A quick video shows how simple 3D models can be created from text prompts similar to those used to create 2D AI artwork. 3D models are the next frontier for artificial intelligence to tackle, and Stable 3D is an early glimpse of this transformation: it automates the creation of 3D objects, a task that traditionally requires specialized skills and a significant amount of time.

Create 3D models from text prompts using AI

With Stable 3D, non-experts can create draft-quality 3D models in minutes. This is achieved by simply selecting an image or illustration, or writing a text prompt. The tool then uses this input to generate a 3D model, removing the need for manual modeling and texturing. The 3D objects created with Stable 3D are delivered in the standard “.obj” file format, a universal format compatible with most 3D software. These objects can then be further edited and enhanced using popular 3D tools such as Blender and Maya. Alternatively, they can be imported into a game engine such as Unreal Engine 5 or Unity for game development purposes.
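Part of why ".obj" is so widely supported is that it is a plain-text format: "v" lines define vertices and "f" lines define faces by 1-based vertex indices. As a rough illustration (the single triangle below is hand-written, not a Stable 3D output, and the parser handles only this tiny subset of the format):

```python
# Minimal sketch of the Wavefront .obj format that Stable 3D outputs:
# 'v' lines are vertices, 'f' lines are faces indexing them (1-based).
# The triangle here is hand-written for illustration.

obj_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def parse_obj(text):
    """Collect vertices and faces from a (very) minimal .obj string."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts and parts[0] == "f":
            faces.append(tuple(int(i) for i in parts[1:]))
    return vertices, faces

vertices, faces = parse_obj(obj_text)
print(len(vertices), "vertices,", len(faces), "face")
```

Because tools such as Blender, Maya, Unreal Engine and Unity all read this same simple structure, a generated object can move straight from the AI into an existing production pipeline.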

Stable 3D not only simplifies the 3D content creation process but also makes it more affordable. The tool aims to level the playing field for independent designers, artists, and developers by empowering them to create thousands of 3D objects per day at a low cost. This could revolutionize industries such as game development, animation, and virtual reality, where the creation of 3D objects is a crucial aspect of the production process.

Stable 3D by Stability AI

The introduction of Stable 3D signifies a significant leap forward in 3D content creation, and the ability to generate 3D models from text prompts in minutes is a testament to the advancements in artificial intelligence and its potential applications in digital content creation. We can expect the generated models to become more sophisticated over the coming months, moving from simple shapes to fully detailed mesh models.

Currently, Stability AI has introduced a private preview of Stable 3D for interested parties. To request access to the Stable 3D private preview, individuals or organizations can visit the Stability AI contact page. This provides an opportunity to explore the tool’s capabilities firsthand and to understand how it can streamline the 3D content creation process.

Stable 3D is a promising tool that has the potential to revolutionize 3D content creation. By automating the generation of 3D objects and making the process accessible to non-experts, it is paving the way for a new era in digital content creation. Its compatibility with standard 3D file formats and editing tools further enhances its usability, making it a valuable asset for independent designers, artists, and developers. As Stable 3D continues to evolve, it is expected to significantly contribute to the digital content landscape.

As soon as more information is revealed about the quality of the renderings and how they can be used, we will keep you up to speed as always. In the meantime, head over to the official Stability AI website for more details.

Filed Under: Technology News, Top News




