Categories
News

Porsche Turbo models to get new look

Porsche Turbo models

Porsche has announced that it is planning to give its Turbo models a new look to further differentiate them from other cars in its lineup. This will start with the new Porsche Panamera, which is going to be made official on the 24th of November.

“In 1974, we presented the first turbocharged 911. Since then, the Turbo has become a synonym for our high-performance top models and is now more or less a brand of its own. We now want to make the Turbo even more visible, and differentiate it more markedly from other derivatives such as the GTS,” explains Michael Mauer, Vice President Style Porsche. “This is why we’ve developed a distinctive Turbo aesthetic. From now on, the Turbo versions will exhibit a consistent appearance across all model series – one that is elegant, high-quality and very special.”

The new Turbonite metallic tone is exclusively reserved for the Turbo models. Like all Porsche paints, it was carefully composed by the Porsche Colour & Trim experts. Gold elements create an elegant, metallising effect, with the top layer in a contrasting satin finish. In future, the lettering on the rear and the Daylight Opening (DLO), as well as the borders of the side windows, will be given a Turbonite finish on the Turbo models. Depending on the model series, further details such as the inlays in the front aprons, the spokes, or the aeroblades in the light alloy wheels could feature Turbonite paintwork.

You can find out more about what Porsche has planned for its new Turbo models over at the Porsche website at the link below. As soon as we learn more, we will let you know.

Source: Porsche

Filed Under: Auto News





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


How foundation models are changing the world of AI

An artificially intelligent robotic hand touching a human hand, illustrating the creation of foundation models

Artificial intelligence is becoming part of our daily lives faster than anyone thought possible. It is changing the way we live every day, week, and month as companies race to introduce new innovations, competing to create the most advanced AI tools and services. In this competition, foundation models have become key. These are much more than typical machine learning tools: trained on vast and diverse amounts of data, they loom large in the world of technology, and their impact is transforming how we see and understand the field.

What are AI foundation models?

Think of foundation models as the robust scaffolding upon which modern AI is constructed. Their training is extensive, covering a broad spectrum of data, which empowers them to decipher complex patterns and connections that were previously out of reach. This is not a simple training process but a thorough and diverse one, preparing these models to be customized for specific needs. The effectiveness of this method is evident in the leaps AI has made recently, pushing the envelope of what we believed possible.

  • Large-Scale Training: Trained on vast, diverse datasets.
  • Versatile Foundation: Serves as a base for building specialized AI systems.
  • Extensive Pre-Training: Undergoes rigorous pre-training on a wide range of tasks.
  • Fine-Tuning Capability: Can be customized for specific applications.
  • Efficiency in Development: Reduces the need to create new models for each task.
  • Broad Applicability: Useful in various industries like healthcare, finance, and transportation.
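To make the fine-tuning idea in the list above concrete, here is a minimal, self-contained Python sketch: a tiny stand-in "foundation model" provides frozen features learned elsewhere, and only a small task head is trained on new data. The network sizes and data are toy illustrations, not a real pre-trained model:

```python
import math
import random

random.seed(0)

# Stand-in "foundation model": a frozen feature extractor whose weights
# were fixed by (simulated) large-scale pre-training. It is never
# updated during fine-tuning.
FROZEN_W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]

def foundation_features(x):
    """Map a 2-d input to 4 frozen, non-linear features."""
    return [math.tanh(w[0] * x[0] + w[1] * x[1]) for w in FROZEN_W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(data, epochs=200, lr=0.5):
    """Train only a small task head on top of the frozen features."""
    head, bias = [0.0] * len(FROZEN_W), 0.0
    for _ in range(epochs):
        for x, y in data:
            feats = foundation_features(x)
            p = sigmoid(sum(h * f for h, f in zip(head, feats)) + bias)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            head = [h - lr * g * f for h, f in zip(head, feats)]
            bias -= lr * g
    return head, bias

def predict(head, bias, x):
    feats = foundation_features(x)
    return sigmoid(sum(h * f for h, f in zip(head, feats)) + bias)
```

The same pattern scales up: the expensive pre-trained layers stay fixed while only a lightweight task-specific head is trained, which is why adapting a foundation model is so much cheaper than training one from scratch.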

Other articles you may find of interest on the subject of AI foundation models:

A Paradigm Shift in AI

The advent of foundation models has indeed revolutionized the field of AI, altering the traditional approach of model development. Here’s an expanded view of this transformation:

  • The Traditional Approach: Previously, AI development predominantly focused on creating specific models tailored for individual tasks. This approach, while effective for targeted applications, had its drawbacks. Each new task required starting from the ground up, developing a model from scratch. This process was not only time-consuming but also demanded significant computational resources and expertise. It often resulted in a siloed development environment where the progress in one task didn’t necessarily translate to others.
  • The Emergence of Foundation Models: Foundation models have shifted this paradigm. Unlike their predecessors, these models are not designed for a single, specific purpose. Instead, they are trained on enormous and diverse datasets, covering a wide array of information and tasks. This extensive pre-training equips them with a broad understanding and adaptability, making them a versatile tool in the AI arsenal.
  • Broad Pre-Training and Fine-Tuning Abilities: The real power of foundation models lies in their ability to be fine-tuned. After the initial, extensive pre-training, these models can be adapted to specific tasks with relatively minimal additional training. This is a stark contrast to the traditional method, where each new task might require building an entirely new model.
  • Efficiency and Resource Utilization: The efficiency gained through this approach is twofold. Firstly, it significantly reduces the time and resources required to develop AI solutions. Developers can now take a pre-trained foundation model and tailor it to their needs, bypassing the lengthy and resource-intensive process of training a model from zero. Secondly, it optimizes computational resources, as the same foundational model can be reused across multiple applications.
  • Democratization of AI: Perhaps one of the most impactful aspects of foundation models is their role in democratizing AI. Their adaptability and efficiency make advanced AI technologies accessible to a broader range of users and developers, including those with limited resources. Smaller organizations, startups, and even individual researchers can leverage these powerful models, leveling the playing field in AI development and innovation.

The rise of foundation models represents a fundamental shift in how AI systems are developed and applied. This shift not only enhances efficiency and resource utilization but also broadens the scope of AI, making cutting-edge technologies more accessible and equitable.

The Wide-Reaching Impact of Foundation Models

The impact of foundation models in AI transcends the realms of efficiency and resource management, heralding new capabilities that were once thought to be exclusively within the realm of human intelligence.

  • Understanding and Generating Human Language: Foundation models have significantly advanced the field of natural language processing (NLP). They are capable of understanding nuances, contexts, and even subtleties in human language, a feat that was once challenging for AI. These models can generate coherent, contextually relevant, and sometimes even creative textual content. This ability has applications in a wide range of areas, from automated customer service and chatbots to content creation and language translation services.
  • Recognizing Complex Images: In the realm of computer vision, foundation models have made strides in enabling machines to recognize and interpret complex visual data. They can identify objects, scenes, and activities in images and videos with a high degree of accuracy. This capability is crucial in various applications, such as medical imaging for disease diagnosis, autonomous vehicle technology, and surveillance systems. The sophistication of these models in image recognition mirrors human-like understanding, allowing for more nuanced and accurate interpretations.
  • Mastering Intricate Games: Foundation models have demonstrated their prowess by mastering complex games, which require strategic thinking, planning, and decision-making skills akin to human players. Games like chess, Go, and various strategy video games, traditionally requiring deep cognitive abilities, are now arenas where AI can perform at or above the level of the best human players. This achievement not only showcases the advanced computational and strategic capabilities of these models but also provides insights into how AI can handle complex, multi-layered decision-making scenarios in real-world applications.
  • Beyond Traditional AI Boundaries: These advancements mark a significant departure from the earlier limitations of AI. Foundation models have pushed the boundaries, venturing into areas that require a level of understanding, reasoning, and learning that was previously considered exclusive to humans. This shift is not just about performing tasks; it’s about imbuing AI systems with a level of cognition and adaptability that closely mirrors human intelligence.
  • Implications and Potential: The abilities of foundation models open up a plethora of possibilities across various sectors. In healthcare, they can aid in diagnostic procedures and patient care management. In the automotive industry, they contribute to the development of more sophisticated autonomous driving systems. In entertainment and arts, they assist in creating complex, dynamic content. The list of applications is ever-growing, indicating a future where AI’s role is integral and pervasive in solving some of the most intricate challenges and tasks.

AI foundation models are not just enhancing the efficiency of AI systems; they are redefining what AI can achieve. By mastering language, visual understanding, and complex problem-solving, these models are bridging the gap between artificial and human intelligence, opening up unprecedented possibilities across a myriad of industries and applications.

Transforming Industries with Foundation Models

The influence of foundation models is far-reaching, creating a ripple effect that is transforming multiple industries in significant ways.

  • Healthcare: In the healthcare industry, foundation models are revolutionizing both diagnostics and treatment planning. For instance, in medical imaging, AI can now accurately interpret X-rays, MRIs, and CT scans, often identifying nuances that might be missed by the human eye. This capability enhances diagnostic accuracy and speeds up the process, leading to quicker and more effective patient care. Additionally, AI-driven predictive models are being used to forecast patient outcomes, personalize treatment plans, and even assist in drug discovery and development.
  • Finance: The financial sector is leveraging foundation models for a range of applications, from fraud detection to personalized financial advice. AI algorithms can analyze vast amounts of financial data at an unprecedented speed, identifying patterns indicative of fraudulent activity. This helps in mitigating risks and protecting consumers. Moreover, AI is being used to tailor financial products and services to individual customers, enhancing customer experience and satisfaction.
  • Entertainment: In the world of entertainment, these models are transforming content creation and recommendation systems. AI algorithms can analyze user preferences and viewing habits to recommend personalized content, enhancing user engagement. Furthermore, AI is being used in the creation of realistic visual effects and even generating new content, such as music, art, and literature, opening new avenues for creative expression.
  • Transportation: The transportation sector is seeing a significant impact, especially in the development of autonomous vehicle technology. Foundation models are key in processing and interpreting the vast array of sensory data required for self-driving cars, from recognizing traffic signals and obstacles to making real-time navigation decisions. This advancement not only holds the promise of reducing traffic accidents but also aims to revolutionize the way we commute.
  • Accelerated AI Research and Development: Beyond these industry-specific applications, foundation models are fueling a rapid acceleration in AI research and development as a whole. Breakthroughs in natural language processing (NLP) have led to more sophisticated voice assistants and translation services. In computer vision, advancements have improved object recognition and scene interpretation. Reinforcement learning, powered by foundation models, is enabling AI systems to learn and adapt from their environment, making decisions based on complex datasets and simulations.
  • Broadening the Scope of AI: These developments are broadening the scope and capabilities of AI, enabling it to tackle more complex, multifaceted problems. AI is no longer confined to narrow, specific tasks but is increasingly capable of handling tasks that require a degree of understanding, reasoning, and learning that was once thought to be the exclusive domain of humans.

Foundation models are more than just a step forward in AI; they represent a paradigm shift. They have redefined the development and application of AI systems, leading to impressive advancements in capabilities. As they continue to evolve, they promise to further reshape the landscape of AI, unlocking new potential and opportunities. With foundation models, the future of AI looks not only bright but boundless.

Filed Under: Guides, Top News







How to customize the new GPT models

Embracing the New Era of Custom GPTs

The advent of Custom GPTs marks a pivotal moment in the evolution of AI technology, ushering in an age where users have the power to sculpt AI systems to fit their distinct needs and preferences. This significant transition from broadly applied, generic AI models to highly personalized versions signifies a fundamental change in our interaction with technology. By shifting towards these customized models, we can enhance their relevance, efficiency, and effectiveness across a spectrum of fields, thus revolutionizing how technology integrates into our daily lives and professional environments.

The Art of Creating Your Personalized GPT

Embarking on the journey to create your own GPT begins by visiting a specific URL provided by OpenAI. At this juncture, you are greeted by an intuitive interface designed to simplify the process of model selection. This platform facilitates the selection of various GPT models, each offering unique features, which you can then fine-tune to align with your specific objectives. It’s a process that emphasizes user empowerment and flexibility, allowing for a level of customization that aligns the AI’s capabilities with your individual goals and requirements.

Personalizing Your GPT: A Detailed Process

The customization journey takes on a more personal touch as you start by assigning a name to your GPT, such as ‘mvin PR’ for a GPT designed to act as a math tutor. This initial step is followed by setting a distinctive profile picture and defining the specific role of your GPT, thus enriching the user experience and fostering a more engaging interaction. These seemingly small yet impactful customizations add layers of personality and functionality to the GPT, transforming it into a more relatable and effective tool.

Uncovering the Interactive Potential of Custom GPTs

Perhaps the most striking feature of these tailor-made GPTs is their ability to interact dynamically. This is exemplified in their capability to tackle complex tasks such as solving mathematical equations, where they demonstrate a step-by-step problem-solving approach. This interactive prowess not only enhances their utility in educational settings but also showcases the practical application of AI in learning and problem-solving scenarios.

Advancing Customization for Deeper Engagement

The journey of customization delves deeper in the ‘configure’ section, where you can refine the GPT’s instructions, start engaging conversations, and upload pertinent files to broaden the GPT’s knowledge horizon. This stage of customization transforms the GPT from a mere digital tool to an evolving, learning entity, capable of adapting and growing in response to the information and tasks presented to it.

Broadening Capabilities for Enhanced Functionality

In this transformative era, GPTs have evolved beyond mere text-based interactions. The integration of additional features such as web browsing capabilities, DALL-E 3 for generating visual content, and code interpretation greatly enhances their utility. The possibility of incorporating custom ChatGPT plugins further augments their functionality, offering an unparalleled level of versatility and adaptability.

Ensuring Data Privacy and Control

Managing data privacy and sharing preferences forms a critical component of the custom GPT experience. Users are endowed with the ability to define the privacy settings of their GPT, choosing between public, private, or link-accessible options. This flexibility addresses privacy concerns while also facilitating collaboration and sharing when desired, striking a balance between security and accessibility.

Validating Effectiveness through Real-world Testing

Testing the capabilities of your custom GPT in real-world scenarios, such as requesting it to elucidate complex subjects like integrals, serves as a testament to its adherence to the set instructions and the effectiveness of the knowledge it has assimilated. This phase of interaction is crucial in assessing the practicality and reliability of the custom GPT in actual use cases.

Facilitating Sharing and Enhancing Accessibility

The final aspect of this journey emphasizes the ease with which these custom GPTs can be shared, primarily through the use of shareable links, and their accessibility, highlighted by their presence in the user interface sidebar. These features underscore the user-friendly nature of the models and their practicality for day-to-day use, making them accessible tools for a wide range of users and applications.


NVIDIA shatters records, training AI models in under 4 minutes

NVIDIA sets new AI training records

NVIDIA’s AI platform has once again demonstrated its capabilities by setting new records in the latest MLPerf industry benchmarks, a well-regarded measure for AI training and high-performance computing. The AI supercomputer, NVIDIA Eos, powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking, completed a training benchmark based on a GPT-3 model in a record-breaking 3.9 minutes. This significant improvement from the previous record demonstrates the potential for faster training times, which can reduce costs, save energy, and speed up product development, making it a game-changer in the industry.

These latest results were achieved by using the highest number of accelerators ever used in an MLPerf benchmark. This achievement underscores NVIDIA’s ability to meet the unique challenges of generative AI for the world’s largest data centers. Eos and Microsoft Azure used a full-stack platform of innovations in accelerators, systems, and software to achieve these groundbreaking results, showcasing the power of collaboration and technological advancement.

Training AI models

In this round, NVIDIA set several new records, further solidifying its position as a leader in the field. In addition to making significant advances in generative AI, the H100 GPUs trained recommender models 1.6x faster than in the previous round. NVIDIA was the only company to run all MLPerf tests, further demonstrating its commitment to pushing the boundaries of AI technology.

Six new performance records set by NVIDIA

The NVIDIA AI platform was used in submissions this round by eleven systems makers, including industry heavyweights such as ASUS, Dell Technologies, Fujitsu, GIGABYTE, Lenovo, QCT, and Supermicro. This widespread use of NVIDIA’s technology is a clear indication of its robustness, reliability, and acceptance in the industry, demonstrating its potential to revolutionize AI technology.

NVIDIA AI platform

In the MLPerf HPC benchmarks, H100 GPUs delivered up to twice the performance of NVIDIA A100 Tensor Core GPUs. This performance boost was particularly noticeable when the H100 GPUs trained OpenFold, a model that predicts the 3D structure of a protein, in just 7.5 minutes, showcasing the power and efficiency of NVIDIA’s technology.

Several partners, including Dell Technologies, Clemson University, Texas Advanced Computing Center, Lawrence Berkeley National Laboratory, and Hewlett Packard Enterprise (HPE), made submissions on the NVIDIA AI platform in this round. This collaboration between NVIDIA and these organizations demonstrates the platform’s versatility and applicability across various sectors, highlighting its potential to transform industries.

MLPerf benchmarks

MLPerf benchmarks have received wide support from both industry and academia, with organizations such as Amazon, Arm, Baidu, Google, Harvard, HPE, Intel, Lenovo, Meta, Microsoft, NVIDIA, Stanford University, and the University of Toronto backing them. All the software NVIDIA used is available from the MLPerf repository, ensuring transparency and accessibility for all interested parties, further promoting the democratization of AI technology.

NVIDIA’s AI platform continues to set new standards in AI training and high-performance computing, as evidenced by its latest achievements in the MLPerf industry benchmarks. The platform’s ability to reduce costs, save energy, and speed up product development, coupled with its widespread adoption by system makers and partners, underscores its value and potential in advancing AI technology. This continued success is a testament to NVIDIA’s commitment to pushing the boundaries of AI technology and its potential to revolutionize industries.

Image Credit: NVIDIA

Filed Under: Technology News, Top News







How to make GPTs – custom ChatGPT AI models – no code

How to make GPTs - custom ChatGPT AI models easily, no coding required

ChatGPT is all well and good, but what if you could customize it further to make it more specific for certain tasks? If you have thought this was a good idea, you will be pleased to know that OpenAI has recently announced the launch of its new GPTs: a new way to create custom ChatGPT AI models with no code and no need to know or learn anything about programming.

The ability to tweak and customize AI models to your specific needs is not just a luxury; it’s a game changer. GPTs are custom versions of ChatGPT that can be molded to suit your everyday needs, be it for work, play, or personal growth. What makes GPTs stand out is their unique capacity to take on tasks ranging from explaining complex board game rules to assisting with math homework or even designing creative stickers. Best of all, you don’t need to know anything about coding or programming to create these amazing GPTs.

How to make GPTs

For those intrigued by the prospect of building their own GPT, the process is surprisingly user-friendly. The creation of a GPT is as straightforward as engaging in a chat. You give it instructions, infuse it with additional knowledge, and choose its capabilities, such as web searching, image creation, or data analysis.

Here’s a closer look at the process:

  1. Start the conversation: Just like talking to a friend, you begin by telling your GPT what you need.
  2. Customize the skill set: Whether it’s solving algebra problems or planning a menu, you decide what your GPT should excel in.
  3. Enhance with extras: You can arm your GPT with more information or connect it to external APIs – with your control over the data shared.
  4. Put it to work: Your GPT is now ready to assist you, your company, or even the public if you choose to share it.
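Since GPTs are built through ChatGPT’s chat interface rather than through code, the four steps above can only be mirrored as data. The Python sketch below assembles a hypothetical configuration payload; the field names are illustrative and do not come from any official OpenAI schema:

```python
def build_gpt_config(name, instructions, capabilities,
                     knowledge_files=(), actions=()):
    """Assemble an illustrative configuration for a custom GPT.

    The keys below simply mirror the four steps in the text; they are
    hypothetical, not an official OpenAI schema.
    """
    allowed = {"web_browsing", "image_generation", "data_analysis"}
    unknown = set(capabilities) - allowed
    if unknown:
        raise ValueError(f"unsupported capabilities: {sorted(unknown)}")
    return {
        "name": name,
        "instructions": instructions,              # step 1: what you need
        "capabilities": sorted(capabilities),      # step 2: its skill set
        "knowledge_files": list(knowledge_files),  # step 3: extra knowledge
        "actions": list(actions),                  # step 3: external APIs
        "sharing": "private",                      # step 4: share when ready
    }
```

A math-homework helper, for instance, might be described as `build_gpt_config("Algebra Buddy", "Explain every step.", ["data_analysis"])`.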

The desire for personalized AI has been brewing since ChatGPT’s inception. Initially, Custom Instructions allowed for some personalization, but the clamor for greater autonomy persisted. GPTs have answered this call, automating what once was a manual, prompt-driven operation.

Custom ChatGPT AI models

If you are wondering how you can be part of this innovative community, the good news is that anyone can contribute. The forthcoming GPT Store will be a marketplace for these creations, spotlighting the most innovative and practical GPTs in various categories, such as productivity and education.

Privacy and security are paramount in this new frontier. Conversations with GPTs remain private, and builders have the discretion to use chat data to refine their models, depending on user preferences. OpenAI is committed to enforcing usage policies to safeguard against misuse, ensuring a safe environment for users to explore and build.

Moreover, the platform is designed to be integrative, offering seamless connection with tools like Gmail, Slack, and Notion, thereby expanding the GPT’s utility. It’s backed by an AI language model trained on diverse internet texts, capable of learning and adapting to specialized datasets for specific tasks.

If you’re curious about how these special AI programs work, it’s important to know that they don’t know everything and they can’t get the very latest information. They also don’t really get the bigger picture outside of the immediate chat they’re having with you. But, they are really good at using information from the past to give smart and helpful answers to your questions.

Looking to the horizon, the field of AI is ever-evolving. Events like the OpenAI Dev Day and advancements such as GPT-4 Turbo herald a future where AI’s role in our daily lives will be even more significant. And for the developers and tinkerers, the inclusion of tools like DALL-E 3 for image generation and Zapier AI actions for task automation opens up new realms of possibility.

The journey of creating your own GPT is one of exploration and innovation. With the right guide and tools at your disposal, the power to customize AI becomes not just accessible, but also a conduit for sharing your expertise and creativity with the world. To learn more about the introduction of OpenAI GPTs, jump over to the official website.

Filed Under: Guides, Top News







Stable 3D AI creates 3D models from text prompts in minutes

Stable 3D AI makes 3D models from text prompts

The ability to create 2D images using AI has already been mastered and dominated by tools such as Midjourney, OpenAI’s latest DALL-E 3, Leonardo AI, and Stable Diffusion. Now Stability AI, the creators of Stable Diffusion, are entering the realm of creating 3D models from text prompts in just minutes, with the release of its new automatic 3D content creation tool in the form of Stable 3D AI. This innovative tool is designed to simplify the 3D content creation process, making the generation of concept-quality textured 3D objects more accessible than ever before.

A quick video has been created showing how simple 3D models can be created from text prompts, similar to those used to create 2D AI artwork. 3D models are the next frontier for artificial intelligence to tackle. Stable 3D offers an early glimpse of this transformation and is a game-changer in the realm of 3D modeling, automating the process of creating 3D objects, a task that traditionally requires specialized skills and a significant amount of time.

Create 3D models from text prompts using AI

With Stable 3D, non-experts can create draft-quality 3D models in minutes. This is achieved by simply selecting an image or illustration, or writing a text prompt. The tool then uses this input to generate a 3D model, removing the need for manual modeling and texturing. The 3D objects created with Stable 3D are delivered in the standard “.obj” file format, a universal format compatible with most 3D software. These objects can then be further edited and enhanced using popular 3D tools such as Blender and Maya. Alternatively, they can be imported into a game engine such as Unreal Engine 5 or Unity for game development purposes.
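The “.obj” format mentioned above is plain text, which is a large part of why it is so portable. As a rough illustration, this Python sketch reads the vertex (`v`) and face (`f`) records that make up a basic mesh:

```python
def parse_obj(text):
    """Parse vertex ('v') and face ('f') records from Wavefront .obj text.

    Normals, texture coordinates, and other record types are ignored.
    Face indices in .obj files are 1-based, so they are shifted to 0-based.
    """
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # a face entry may look like "3", "3/1", or "3/1/2"
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces
```

Feeding it the classic one-triangle file (`v 0 0 0`, `v 1 0 0`, `v 0 1 0`, `f 1 2 3`) yields three vertices and a single zero-indexed face, which tools like Blender, Maya, Unreal Engine 5, or Unity can consume directly.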

Stable 3D not only simplifies the 3D content creation process but also makes it more affordable. The tool aims to level the playing field for independent designers, artists, and developers by empowering them to create thousands of 3D objects per day at a low cost. This could revolutionize industries such as game development, animation, and virtual reality, where the creation of 3D objects is a crucial aspect of the production process.

Other articles you may find of interest on the subject of Stability AI:

Stable 3D by Stability AI

The introduction of Stable 3D marks a significant leap forward in 3D content creation, and the ability to generate 3D models from text prompts in minutes is a testament to the advancements in artificial intelligence and its potential applications in digital content creation. We can expect the 3D models to become even more sophisticated over the coming months, moving from simple shapes to fully detailed mesh models.

Currently, Stability AI has introduced a private preview of Stable 3D for interested parties. To request access to the Stable 3D private preview, individuals or organizations can visit the Stability AI contact page. This provides an opportunity to explore the tool’s capabilities firsthand and to understand how it can streamline the 3D content creation process.

Stable 3D is a promising tool that has the potential to revolutionize 3D content creation. By automating the generation of 3D objects and making the process accessible to non-experts, it is paving the way for a new era in digital content creation. Its compatibility with standard 3D file formats and editing tools further enhances its usability, making it a valuable asset for independent designers, artists, and developers. As Stable 3D continues to evolve, it is expected to significantly contribute to the digital content landscape.

As soon as more information on the quality of the renderings and how they can be used is revealed, we will keep you up to speed as always. In the meantime, jump over to the official Stability AI website for more details.

Filed Under: Technology News, Top News







How to run AI models on a Raspberry Pi and single board computers (SBC)

Running AI models on single board computers (SBCs)

If you are looking for a project to keep you busy this weekend, you might be interested to know that it is possible to run artificial intelligence in the form of large language models (LLMs) on small single board computers (SBCs) such as the Raspberry Pi and others. With the launch of the new Raspberry Pi 5 this month, it’s now possible to carry out more power-intensive tasks thanks to its increased performance.

Before you start, though, it’s worth remembering that running AI models, particularly large language models (LLMs), on a Raspberry Pi or other SBCs presents an interesting blend of challenges and opportunities. While you trade off computational power and convenience, you gain in terms of cost-effectiveness, privacy, and hands-on learning. It’s a field ripe for exploration, and for those willing to navigate its limitations, the potential for innovation is significant.

One of the best ways of accessing ChatGPT from your Raspberry Pi is to set up a connection to the OpenAI API, building programs in Python, JavaScript, or other programming languages that connect to ChatGPT remotely. If you are looking for a more secure, locally installed option that runs AI directly on your mini PC, you will need to select a lightweight LLM that is capable of running on the hardware and answering your queries effectively.
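For the remote route, a call to the OpenAI API is just an authenticated HTTPS POST. The short Python sketch below builds such a request using only the standard library; the model name and the placeholder key are illustrative, so substitute your own:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI chat endpoint

def build_request(api_key: str, prompt: str, model: str = "gpt-3.5-turbo"):
    """Build (but do not send) an HTTP request for the OpenAI chat API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Sending it requires a valid key and network access, e.g.:
# with urllib.request.urlopen(build_request("sk-...", "Hello!")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the heavy lifting happens on OpenAI's servers, even the smallest Pi can run a client like this; the trade-off, as noted above, is that your prompts leave your network.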

Running AI models on a Raspberry Pi

Watch the video below, kindly created by Data Slayer, to learn more about how this can be accomplished. If you are interested in learning more about how to utilize the power of your mini PC, I definitely recommend you check out his other videos.

Other articles we have written that you may find of interest on the subject of Raspberry Pi 5:

Before diving in, it’s important to outline the challenges. Running a full-scale LLM on a Raspberry Pi is not as straightforward as running a simple Python script. These challenges are primarily:

  • Limited Hardware Resources: Raspberry Pi offers less computational power compared to typical cloud-based setups.
  • Memory Constraints: RAM can be a bottleneck.
  • Power Consumption: LLMs are known to be energy-hungry.
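To get a feel for the memory constraint, a back-of-the-envelope calculation helps: the weights alone need roughly (parameter count × bits per weight ÷ 8) bytes. The Python sketch below uses rough figures and ignores activation and runtime overhead, but it shows why quantized models are the only realistic option on an 8 GB board:

```python
def model_memory_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Rough RAM needed just to hold the weights (ignores runtime overhead)."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# A 7B model needs ~28 GB at full fp32 precision, but only ~3.5 GB at 4 bits.
for bits in (32, 16, 4):
    print(f"7B model at {bits}-bit: {model_memory_gb(7, bits):.1f} GB")
```

This is why the quantized builds discussed below are what actually fit on a Raspberry Pi.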

Benefits of running LLMs on single board computers

Firstly, there’s the compelling advantage of affordability. Deploying AI models on cloud services can accumulate costs over time, especially if you require significant computational power or need to handle large data sets. Running the model on a Raspberry Pi, on the other hand, is substantially cheaper in the long run. Secondly, you gain the benefit of privacy. Your data never leaves your local network, a perk that’s especially valuable for sensitive or proprietary information. Last but not least, there’s the educational aspect. The hands-on experience of setting up the hardware, installing the software, and troubleshooting issues as they arise can be a tremendous learning opportunity.

Drawbacks due to the lack of computational power

However, these benefits come with distinct drawbacks. One major issue is the limited hardware resources of Raspberry Pis and similar SBCs. These devices are not designed to be powerhouses; they lack the robust computational capabilities of a dedicated server or even a high-end personal computer. This limitation is particularly pronounced when it comes to running Large Language Models (LLMs), which are notorious for their appetite for computational resources. Memory is another concern; Raspberry Pis often come with a limited amount of RAM, making it challenging to run data-intensive models. Furthermore, power consumption can escalate quickly, negating some of the cost advantages initially gained by avoiding cloud services.

Setting up your mini PC

Despite these challenges, there have been advancements that make it possible to run LLMs on small computers like the Raspberry Pi. One notable example is the work of Georgi Gerganov, who ported LLaMA, a family of LLMs released by Meta (Facebook), to C++ in the llama.cpp project. This reduced the resource requirements of the model significantly, making it possible to run on tiny devices like the Raspberry Pi.

Running an LLM on a Raspberry Pi is a multi-step process. First, Ubuntu Server is loaded onto the Raspberry Pi. An external drive is then mounted to the Pi, and the model is downloaded to the drive. The next step involves cloning the git repo, compiling it, and moving the model into the repo directory. Finally, the LLM is run on the Raspberry Pi. While the process might be a bit slow, it can handle concrete questions well.

It’s important to note that LLMs are still largely proprietary and closed-source. While Facebook has released an open-source version of its Llama model, many others are not publicly available. This can limit the accessibility and widespread use of these models.

Running AI models on compact platforms like Raspberry Pi and other single-board computers (SBCs) presents a fascinating mix of advantages and limitations. On the positive side, deploying AI locally on such devices is cost-effective in the long run, eliminating the recurring expenses associated with cloud-based services. There’s also an increased level of data privacy, as all computations are carried out within your own local network. Additionally, the hands-on experience of setting up and running these models offers valuable educational insights, especially for those interested in the nitty-gritty of both hardware and software.

However, these benefits come with their own set of challenges. The most glaring issue is the constraint on hardware resources, particularly when attempting to run Large Language Models (LLMs). These models are computational and memory-intensive, and a Raspberry Pi’s limited hardware isn’t built to handle such heavy loads. Power consumption can also become an issue, potentially offsetting some of the initial cost benefits.

In a nutshell, while running AI models on Raspberry Pi and similar platforms is an enticing proposition that offers affordability, privacy, and educational value, it’s not without its hurdles. The limitations in computational power, memory, and energy efficiency can be significant, especially when dealing with larger, more complex models like LLMs. Nevertheless, for those willing to tackle these challenges, the field holds considerable potential for innovation and hands-on learning.

Filed Under: Guides, Top News






LM Studio makes it easy to run AI models locally on your PC, Mac

LM Studio makes it easy to run AI models locally on your PC Mac and Linux

If you are interested in trying out the latest AI models and large language models that have been trained in different ways, or would simply like one of the open source AI models running locally on your home network to assist you with daily tasks, you will be pleased to know that it is really easy to run LLMs, and hence AI agents, on your local computer without the need for third-party servers. Obviously, the more powerful your laptop or desktop computer is, the better, but as long as you have a minimum of 8GB of RAM you should be able to run at least one or two smaller AI models such as Mistral and others.

Running AI models locally opens up opportunities for individuals and small businesses to experiment and innovate with AI without the need for expensive servers or cloud-based solutions. Whether you’re a student, an AI enthusiast, or a professional researcher, you can now easily run AI models on your PC, Mac, or Linux machine.

One of the most user-friendly tools for this purpose is LM Studio, a software that allows you to install and use a variety of AI models. With a straightforward installation process, you can have LM Studio set up on your computer in no time. It supports a wide range of operating systems, including Windows, macOS, and Linux, making it accessible to a broad spectrum of users.

The user interface of LM Studio is designed with both beginners and advanced users in mind. The advanced features are neatly tucked away, so they don’t overwhelm new users but are easily accessible for those who need them. For instance, you can customize options and presets to tailor the software to your specific needs.

LM Studio dashboard

LM Studio dashboard chat box

Other articles we have written that you may find of interest on the subject of running AI models locally

LM Studio supports several AI models, including large language models. It even allows for running quantized models in GGUF format, providing a more efficient way to run these models on your computer. The flexibility to download and add different models is another key feature. Whether you’re interested in NLP, image recognition, or any other AI application, you can find a model that suits your needs.

Search for AI models and LLMs

LM Studio dashboard

AVX2 support required

Your computer will need to support AVX2. Here are a few ways to check which CPU your system is running; once you know, you can do a quick search to see if its specifications list support for AVX2. You can also ask ChatGPT once you know your CPU model, bearing in mind that it will not know about CPUs released after its training cut-off date.

Windows:

  1. Open the Command Prompt.
  2. Run the command systeminfo.
  3. Look for your CPU model in the displayed information, then search for that specific CPU model online to find its specifications.

macOS:

  1. Go to the Apple Menu -> About This Mac -> System Report.
  2. Under “Hardware,” find the “Total Number of Cores” and “Processor Name.”
  3. Search for that specific CPU model online to check its specifications.

Linux:

  1. Open the Terminal.
  2. Run the command lscpu or cat /proc/cpuinfo.
  3. Check for the flag avx2 in the output.

Software Utility:

You can use third-party software like CPU-Z (Windows) or iStat Menus (macOS) to see detailed specifications of your CPU, including AVX2 support.

Vendor Websites:

Visit the CPU manufacturer’s website and look up your CPU model. Detailed specifications should list supported instruction sets.

Direct Hardware Check:

If you have the skill and comfort level to do so, you can directly check the CPU’s markings and then refer to vendor specifications.
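The manual checks above can also be scripted. The Python sketch below is best-effort only: it reads the CPU flags from `/proc/cpuinfo` on Linux and queries `sysctl` on macOS (the `hw.optional.avx2_0` key is my assumption for Intel Macs), and it simply returns `None` on Windows and anywhere else it cannot tell:

```python
import platform
import subprocess

def has_avx2():
    """Best-effort AVX2 check; returns True/False, or None if undetectable."""
    system = platform.system()
    if system == "Linux":
        try:
            with open("/proc/cpuinfo") as f:
                # The "flags" lines list every supported instruction set.
                return "avx2" in f.read().split()
        except OSError:
            return None
    if system == "Darwin":
        try:
            out = subprocess.run(
                ["sysctl", "-n", "hw.optional.avx2_0"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            return out == "1"
        except (OSError, subprocess.CalledProcessError):
            return None
    return None  # Windows and others: check the vendor's spec sheet instead

print("AVX2 supported:", has_avx2())
```

If this prints `None`, fall back on the manual methods listed above.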

For Windows users with an M.2 drive, LM Studio can be run from this high-speed storage device, providing enhanced performance. However, as mentioned before, regardless of your operating system, one crucial factor to consider is the RAM requirement. As a rule of thumb, a minimum of 8 GB of RAM is recommended to run smaller AI models such as Mistral. Larger models may require more memory, so it’s important to check the specifications of the models you’re interested in using.

In terms of model configuration and inference parameters, LM Studio offers a range of options. You can tweak these settings to optimize the performance of your models, depending on your specific use case. This level of control allows you to get the most out of your AI models, even when running them on a personal computer.

One of the most powerful features of LM Studio is the ability to create a local host and serve your model through an API. This means you can integrate your model into other applications or services, providing a way to operationalize your AI models. This feature transforms LM Studio from a mere tool for running AI models locally into a platform for building and deploying AI-powered applications.
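As a sketch of what that looks like in practice: LM Studio's local server speaks an OpenAI-compatible protocol, so any HTTP client can talk to it. The port below (1234) is LM Studio's usual default, but verify it in your own Server tab before relying on it:

```python
import json
import urllib.request

# LM Studio's local server exposes OpenAI-compatible endpoints; port 1234
# is its usual default, but check the Server tab of your own installation.
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def ask_local_model(prompt: str, timeout: float = 120.0) -> str:
    """Send one chat message to the locally served model and return its reply."""
    payload = {"messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# With the LM Studio server running, a call looks like:
# print(ask_local_model("Summarise what an LLM is in one sentence."))
```

Because the endpoint mirrors the OpenAI API shape, existing OpenAI client code can often be pointed at the local server by changing only the base URL.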

Running AI models locally on your PC, Mac, or Linux machine is now easier than ever. With tools like LM Studio, you can experiment with different models, customize your settings, and even serve your models through an API. Whether you’re a beginner or a seasoned AI professional, these capabilities open up a world of possibilities for innovation and exploration in the field of AI.

Filed Under: Guides, Top News






Convert 2D images into 3D models you can use in Blender

Convert flat AI images into 3D models you can use in Blender

If your creative workflow requires you to build 3D models, you might be interested in a new AI tool currently in its early stages of development. Harnessing the power of AI and machine learning, you can now transform simple flat 2D images into 3D models, enabling you to take any images you may have created with AI image generators such as Midjourney, Stable Diffusion, or even the new DALL-E 3 and turn them into 3D models that you can then take into 3D modelling software such as Blender.

But before you start, remember that this machine learning tool is still in early development and won't transform every flat 2D image into a production-ready 3D model. In its latest version it is capable of creating 3D models from clean, simple flat images (you can see a few examples here), although it currently struggles with more complex images. Even so, a conversion that hasn't worked perfectly can still be modified further in Blender, giving you a sense of scale and at least a base 3D model that you can then manipulate further.

The application is called the DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation which is quite a mouthful. It is available to use for free over on the Hugging Face website and the online application has been built using Gradio. If you have not come across Gradio, it has been specifically designed to provide an easy way to build and demonstrate your machine learning models, enabling you to incorporate a user-friendly web interface so that anyone can use it. Check out the demonstration video below to learn more about its capabilities and limitations.

Converting 2D images into 3D models using AI

DreamGaussian is a 3D content generation framework that has been designed to be user-friendly, requiring users to simply drag and drop an image and click Generate 3D to start the process.

Other articles we have written that you may find of interest on the subject of AI tools :

Transforming 2D images into 3D models

The transition from 2D images to manipulable 3D models would represent a quantum leap in the workflow for 3D designers, modelers, and production teams. In traditional 3D modeling, the process often begins with a concept sketch or a 2D image. Designers then have to manually interpret these flat visuals and reconstruct them in a 3D environment. This involves a significant amount of time and expertise to ensure that the 3D model accurately represents the original concept. The manual process is not only labor-intensive but can also introduce errors or inconsistencies that may require further revisions.

With the capability to automatically convert 2D images into 3D models, you essentially remove a large chunk of the manual labor involved. Imagine simply importing a 2D sketch into software like Blender and having it automatically converted into a 3D model that’s ready for manipulation. This would dramatically accelerate the initial stages of design and modeling, allowing professionals to focus more on refining and enhancing the model rather than constructing it from scratch. It would also make the entire design process more accessible to those who may be skilled in concept creation but not necessarily experts in 3D modeling software.

Furthermore, this advancement could streamline collaboration across different teams in a production pipeline. For instance, concept artists, modelers, and animators could work more cohesively, as they would all be dealing with the same automatically-generated base model. This ensures that everyone is on the same page from the get-go, thereby reducing misunderstandings and back-and-forths. In industries like film, gaming, and virtual reality, where time is often of the essence, such efficiencies could translate into significant cost savings and quicker time-to-market for products.

Despite these limitations, the potential of DreamGaussian is undeniable. The tool could be a fantastic AI service for creating 3D models from images, opening up an entirely new avenue for digital artists and graphic designers and providing them with a new way to improve their workflows. Obviously the machine learning tool requires more work, yet it provides a fantastic glimpse at what we can expect in the future for the creation of 3D models. Now that 2D images are so easy to create, converting them to 3D models seems to be the next logical step.

DreamGaussian represents a significant advancement in the field of 3D content creation, harnessing the power of AI to enable users to create 3D models from 2D images. While there are areas for improvement, the tool’s potential for enhancing the efficiency of 3D creation and opening up new opportunities is undeniable.

Image Credit : Mr Lemon

Filed Under: Technology News, Top News






Leonardo Ai Alchemy 2 and new custom SDXL models announced

Leonardo Ai Alchemy 2 brings more control and detail to your AI art

Leonardo AI has just announced the release of Alchemy 2. This advanced pipeline, which has been a celebrated tool in the creative process, is now stepping up its game with an unprecedented level of detail and control for AI art creators. The release of Leonardo Ai Alchemy 2 is not just an update but a new standard in the creative industry, say its creators.

Alchemy 2 has been designed to enhance designs with remarkably high resolution, contrast boost, resonance, and more. It is the ideal tool for both novice and experienced creators, providing a comprehensive understanding of each element of Alchemy. What’s more, this new release comes with a playful feature that promises endless fun.

New custom SDXL models

One of the most exciting updates in this release is the evolution of the signature pipeline Alchemy. This pipeline has consistently been a celebrated tool among creators, and with Alchemy V2, Leonardo AI is taking another significant stride in advancing creative output. To complement this, the company has also unveiled two new custom SDXL models — Leonardo Diffusion XL and Leonardo Vision XL.

Alchemy V2 represents a big leap in high-quality image generation. Paired with an extensive toolkit that includes Elements, Canvas, and more, it’s a creative powerhouse. High Resolution is an integral feature of Leonardo Alchemy, which toggles between a 1.5x and 2x resolution increase. This feature enhances the output resolution of the Alchemy procedure, delivering richer and denser images.

However, it’s important to note that high-resolution outputs will differ from their normal resolution counterparts due to the diffusion process involved in the generation. Therefore, High Resolution cannot be expected to function as an upscaler.

Leonardo Ai Alchemy V2

The goal of Leonardo AI remains clear: to simplify while elevating creativity. With Alchemy V2, this promise is further strengthened. Users can choose from the above models and Alchemy V2 automatically engages. All that’s required is to prepare the prompts and watch the magic unfold.

Other articles we have written that you may find of interest on the subject of Stable Diffusion XL:

Leonardo Alchemy

With Alchemy, users also have two unique upscalers at their disposal: Alchemy Crisp and Alchemy Smooth. These were developed specifically for Alchemy to enhance the images during the upscaling process. Alchemy Crisp is ideal for images with lots of texture detail, including photos, digital art, and some 3D renders. Alchemy Smooth, on the other hand, is best suited for images with smooth textures, including illustrative, anime, and cartoon-like images.

As with the original, Leonardo Ai Alchemy 2 features SDXL, an open-source AI image model created by Stability AI. SDXL 1.0, the flagship image model developed by Stability AI, stands as the pinnacle of open models for image generation. Through extensive testing and comparison with various other models, the conclusive results show that people overwhelmingly prefer images generated by SDXL 1.0 over other open models. With better prompt adherence, image quality, and complexity of output, SDXL 1.0 ticks all the boxes.

“With Stable Diffusion XL, you can create descriptive images with shorter prompts and generate words within images. The model is a significant advancement in image generation capabilities, offering enhanced image composition and face generation that results in stunning visuals and realistic aesthetics.”

The release of Leonardo Ai Alchemy 2 is set to refine and expand the artistic AI workflow of creators even further, thanks to the advanced features of Alchemy V2 and the new Leonardo Diffusion XL and Leonardo Vision XL custom SDXL models.

Filed Under: Technology News, Top News




