
M3 MacBook Air Models Now Arriving to Customers in New Zealand and Australia


It is Friday, March 8 in New Zealand and Australia, which means customers in those two countries who pre-ordered one of the new machines are receiving their MacBook Air models.

Introduced on Monday of this week, the updated 13.6-inch and 15.3-inch ‌MacBook Air‌ models are equipped with the same M3 chip that was introduced in the MacBook Pro late last year.

There are no external changes to the MacBook Air, with Apple instead focusing on internal updates. The M3 chip offers up to 30 percent faster CPU performance than the M2 chip, and there are notable GPU improvements, with Apple adding Dynamic Caching, hardware-accelerated ray tracing, hardware-accelerated mesh shading, and AV1 decode support.

Other improvements to the ‌MacBook Air‌ include support for two external displays when the machine is used in clamshell mode, support for Wi-Fi 6E, enhanced voice clarity for audio and video calls, and a new anodization seal to reduce fingerprints on the Midnight finish.

Apple retail stores in Australia are selling the new ‌MacBook Air‌ machines, and there is plenty of stock for walk-in customers. Apple does not operate stores in New Zealand, so customers in that country need to order online.

Following New Zealand and Australia, sales and deliveries of the new ‌‌MacBook Air‌ models will launch in Asia, the Middle East, Europe, and finally, North America.

We’ll be sharing a hands-on review of the new M3 ‌MacBook Air‌ in the morning after picking up one of the new devices.



M3 MacBook Air Models Now Available for Same-Day Pickup


Starting today, Apple’s refreshed MacBook Air models with M3 chips are available for same-day or next-day pickup at Apple Stores, with no pre-order required. Online orders are also beginning to arrive to customers today.

Customers across the United States, Canada, Europe, Asia, and other regions can now place an order on Apple’s website or in the Apple Store app and arrange for in-store pickup at a local retail location.

To order a product with ‌Apple Store‌ pickup, add the product to your bag on Apple.com, proceed to checkout, select the “I’ll pick it up” option, enter your ZIP code, choose an available ‌Apple Store‌ location, and select a pickup date. Payment is completed online, and a valid government-issued photo ID and the order number may be required upon pickup.

In addition to the M3 chip, the new 13-inch and 15-inch ‌‌MacBook Air‌ models‌ offer Wi-Fi 6E, Voice Isolation and Wide Spectrum microphone modes, enhanced voice clarity in audio and video calls, and a more fingerprint-resistant finish with the Midnight color option. They also now support up to two external displays when the laptop lid is closed, increasing from just a single external display on the previous Apple silicon models.

There are no major external design changes to the M3 ‌MacBook Air‌ models, which continue to be available in Midnight, Starlight, Space Gray, and Silver colors. Pricing for the 13-inch ‌MacBook Air‌ starts at $1,099, while the 15-inch ‌MacBook Air‌ starts at $1,299.



Apple to Produce 8.5 Million OLED iPad Pro Models This Year


Apple has ordered an initial 8.5 million OLED display panels from South Korean suppliers for its upcoming redesigned iPad Pro models, which are expected to arrive as soon as this month. The refresh will mark the biggest design update to the Pro lineup since 2018.

Apple is relying on different OLED display suppliers for the upcoming ~11-inch and ~13-inch iPad Pro models, with Samsung Display exclusively producing ~11-inch panels and LG Display making the ~13-inch panels. Based on current orders, Samsung will produce 4 million units in 2024, while LG will make 4.5 million units for the year, perhaps suggesting that the larger model is forecast to be slightly more popular.

Industry insiders claim the division of labor is due to changes in Apple’s demand outlook for OLED iPad Pro models, as well as the unstable production capacity and yield of the two suppliers, which are both still getting to grips with Apple’s requirement for new panel technologies.

Apple is rumored to be aiming for “unrivaled” display quality, as well as a design that cuts down on the thickness and weight of its ‌iPad Pro models. Recent leaked CAD drawings of the upcoming models offer a better idea of just how thin the tablets will be. The larger ‌iPad Pro‌, for example, will be over 1mm thinner.

The number of OLED panels Apple orders from Samsung and LG may change after production of the initial quantity, depending on fluctuations in production yield and possible adjustments to Apple’s demand forecast for the new OLED iPad Pro models. Apple’s latest shipment forecast is said to be a decrease from the 10 million units that were projected for 2024 last year.

Apple’s ‌iPad Pro‌ models are also expected to be upgraded with faster 3-nanometer M3 chips, and MagSafe charging is a possibility. Apple is also expected to sell the devices with a new Magic Keyboard and an upgraded Apple Pencil. For all the details, refer to our OLED iPad Pro guide.

(Via DigiTimes.)



What is Alibaba Qwen and its 6 LLM AI models?



Alibaba’s Qwen 1.5 is an enhanced version of its large language model series known as Qwen AI, developed by the Qwen team under Alibaba Cloud. It marks a significant advancement in language model technology, offering models ranging from 0.5 billion to 72 billion parameters. This breadth of model sizes aims to cater to different computational needs and applications, showcasing impressive AI capabilities such as:

  • Open-Sourcing: In line with Alibaba’s initiative to contribute to the open-source community, Qwen 1.5 has been made available across six sizes: 0.5B, 1.8B, 4B, 7B, 14B, and 72B parameters. This approach allows for widespread adoption and experimentation within the developer community.
  • Improvements and Capabilities: Compared to its predecessors, Qwen AI 1.5 introduces significant improvements, particularly in chat models. These enhancements likely involve advancements in understanding and generating natural language, enabling more coherent and contextually relevant conversations.
  • Multilingual Support: Like many contemporary large language models, Qwen 1.5 is expected to support multiple languages, facilitating its adoption in global applications and services.
  • Versatility: The availability of the model in various sizes makes it versatile for different use cases, from lightweight applications requiring rapid responses to more complex tasks needing deeper contextual understanding.

Alibaba Large Language Model

Given its positioning and the features outlined, Qwen AI 1.5 represents Alibaba Cloud’s ambition to compete in the global AI landscape, challenging the dominance of other major models with its comprehensive capabilities and open-source accessibility. Let’s take a deeper dive into the workings of the Qwen 1.5 AI model. Here are just a few features of the large language model:

  • Integration of Qwen1.5’s code into Hugging Face transformers for easier access.
  • Collaboration with various frameworks for deployment, quantization, finetuning, and local inference.
  • Availability on platforms like Ollama and LMStudio, with API services on DashScope and together.ai.
  • Improvements in chat models’ alignment with human preferences and multilingual capabilities.
  • Support for a context length of up to 32,768 tokens.
  • Comprehensive evaluation of model performance across various benchmarks and capabilities.
  • Competitive performance of Qwen1.5 models, especially the 72B model, in language understanding, reasoning, and math.
  • Strong multilingual capabilities demonstrated across 12 languages.
  • Expanded support for long-context understanding up to 32K tokens.
  • Integration with external systems, including performance on RAG benchmarks and function calling.
  • Developer-friendly integration with Hugging Face transformers, allowing for easy model loading and use.
  • Support for Qwen1.5 by various frameworks and tools for both local and web deployment.
  • Encouragement for developers to utilize Qwen1.5 for research or applications, with resources provided for community engagement.

Qwen 1.5 AI model

Imagine you’re working on a complex project that requires understanding and processing human language. You need a tool that can grasp the nuances of conversation, respond in multiple languages, and integrate seamlessly into your existing systems. Enter Alibaba’s latest innovation: Qwen1.5, a language model that’s set to redefine how developers and researchers tackle natural language processing tasks. You might also be interested in a new platform built on Qwen 1.5 that provides users with an easy way to build custom AI agents with Qwen-Agents.

Qwen1.5 is the newest addition to the Qwen series, and it’s a powerhouse. It comes in a variety of sizes, ranging from a modest 0.5 billion to a colossal 72 billion parameters. What does this mean for you? It means that whether you’re working on a small-scale application or a massive project, there’s a Qwen1.5 model that fits your needs. And the best part? It works hand-in-hand with Hugging Face transformers and a range of deployment frameworks, making it a versatile tool that’s ready to be a part of your tech arsenal.
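
Since Qwen1.5’s code is merged into Hugging Face transformers, loading one of the chat models takes only a few lines. Here is a minimal sketch, assuming transformers 4.37 or later and the Qwen/Qwen1.5-7B-Chat checkpoint published on the Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the six open-sourced sizes; swap in 0.5B/1.8B/4B/14B/72B as needed
model_id = "Qwen/Qwen1.5-7B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt with the model's built-in chat template
messages = [{"role": "user", "content": "Summarize Qwen1.5 in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```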

Now, let’s talk about accessibility. Alibaba has taken a significant step by open-sourcing the base and chat models of Qwen1.5. You can choose from six different sizes, and there are even quantized versions available for efficient deployment. This is great news because it opens up the world of advanced technology to you without breaking the bank. You can innovate, experiment, and push the boundaries of what’s possible, all while keeping costs low.

Integration with Multiple Frameworks

Integration is a breeze with Qwen1.5. It’s designed to play well with multiple frameworks, which means you can deploy, quantize, fine-tune, and run local inference without a hitch. Whether you’re working in the cloud or on edge devices, Qwen1.5 has got you covered. And with support from platforms like Ollama and LMStudio, as well as API services from DashScope and together.ai, you have a wealth of options at your fingertips for using and integrating these models into your projects.

But what about performance? Qwen1.5 doesn’t disappoint. The chat models have been fine-tuned to align closely with human preferences, and they offer robust support for 12 different languages. This is ideal for applications that require interaction with users from diverse linguistic backgrounds. Plus, with the ability to handle up to 32,768 tokens in context length, Qwen1.5 can understand and process lengthy conversations or documents with ease.

Rigorous Evaluations and Impressive Results

Alibaba didn’t just stop at creating a powerful model; they put it to the test. Qwen1.5 has undergone rigorous evaluation, and the results are impressive. The 72 billion parameter model, in particular, stands out with its exceptional performance in language understanding, reasoning, and mathematical tasks. Its ability to integrate with external systems, like RAG benchmarks and function calling, further highlights its strength and adaptability.

Qwen1.5 is not just a tool for machines; it’s a tool for people. It’s been crafted with developers at its core. Its compatibility with Hugging Face transformers and a variety of other frameworks and tools ensures that it’s accessible for developers who need to deploy models either locally or online. Alibaba is committed to supporting the use of Qwen1.5 for both research and practical applications. They’re fostering a community where innovation and collaboration thrive, driving collective progress in the field.

Alibaba’s Qwen1.5 is more than just an upgrade; it’s a leap forward in language model technology. It brings together top-tier performance and a developer-centric design. With its comprehensive range of model sizes, enhanced alignment with user preferences, and extensive support for integration and deployment, Qwen1.5 is a versatile and powerful tool. It’s poised to make a significant impact in the realm of natural language processing, and it’s ready for you to put it to the test. Whether you’re a seasoned developer or a curious researcher, Qwen1.5 could be the key to unlocking new possibilities in your work. So why wait? Dive into the world of Qwen1.5 and see what it can do for you.



Claude 3 vs ChatGPT vs Gemini AI models compared


Let’s dive into the world of Claude 3, developed by a company called Anthropic. They’ve created three versions of this model: Haiku, Sonnet, and Opus. Each one is tailored for different uses, with Opus being the most advanced and requiring a subscription. When it comes to benchmarks, which are tests to measure performance, Claude 3 is a standout, particularly in coding tasks. It understands and follows complex instructions better than its competitors, which is a big deal for people who write software.

Features of Claude 3

  • Claude 3 AI models surpass benchmarks, outperforming competitors like GPT-4 and Gemini 1.0 Ultra.
  • The family includes Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, with increasing intelligence and cost.
  • Claude 3 Opus exhibits near-human comprehension and fluency in complex tasks.
  • All models show improved analysis, forecasting, content creation, and multilingual communication.
  • New vision capabilities allow processing of various visual formats, aiding enterprise customers.
  • Claude 3 models can perform complex multimodal analyses and utilize sub-agents for parallel task execution.
  • The models offer near-instant response times, with Claude 3 Haiku being the fastest and most cost-effective.
  • Claude 3 models are less likely to refuse prompts and show improved accuracy and recall capabilities.
  • Potential applications include task automation, interactive coding, research, strategy, data processing, customer interactions, and more.
  • Claude 3’s advanced features and performance set a new standard in AI, indicating rapid progress in the field.

But Claude 3 isn’t just about words. It has a unique ability to work with images, too. This means it can understand and reason about content that includes pictures, which is a step forward for AI and opens up new possibilities for different industries. At launch, the Claude 3 models accept a 200,000-token context window, and Anthropic says inputs of over one million tokens may be made available to select customers. This is exciting because it means Claude 3 could become even more versatile and useful in the future.
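
For developers, the Claude 3 models are exposed through Anthropic’s Messages API. A minimal sketch using the anthropic Python SDK, shown with the Opus launch identifier (Sonnet and Haiku have analogous IDs):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # the Opus launch snapshot
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Explain optical character recognition in two sentences."}
    ],
)
print(response.content[0].text)
```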

Claude 3 vs ChatGPT vs Gemini


Claude 3

  • Claude 3 has been hailed as a significant advancement over its predecessors and competitors, with notable strengths in optical character recognition (OCR), nuanced understanding of complex queries, and improved performance in benchmarks. For example, it accurately recognized license plate numbers and a barber pole in an image, indicating superior vision capabilities and context understanding. Despite this, Claude 3, like its competitors, showed limitations in detecting subtle details such as weather conditions in an image. Claude 3’s benchmarks suggest it outperforms Gemini and ChatGPT in many areas, particularly in coding and OCR tasks.
  • ChatGPT (GPT-4) offers robust conversational capabilities and a broad knowledge base. While it may not excel in OCR tasks as Claude 3 does, it remains a versatile tool for a wide range of text-based applications, including writing, summarization, and question-answering. ChatGPT’s conversational nature makes it highly adaptable and user-friendly, though it sometimes lags in specific technical benchmarks compared to the latest Claude 3.
  • Gemini 1.0 Ultra and the unreleased Gemini 1.5 show strong performance in vision tasks and general AI capabilities. However, the introduction of Claude 3 has put Gemini’s capabilities into perspective, particularly in areas like OCR and context-specific queries. While Gemini 1.5 Pro shows improvements over its predecessor, it still faces challenges in competing with Claude 3’s advanced reasoning and OCR capabilities.

Coding Performance

When it comes to doing tasks, Claude 3 is impressive. In coding, it’s not just accurate; it also has style. It can understand and execute detailed programming tasks, which is a huge help for developers. In writing, whether it’s technical documents or creative stories, Claude 3 can produce high-quality content with ease.

Specialization Areas

  • Claude 3 demonstrates significant advancements in handling complex queries and specialized tasks such as OCR and reasoning with images. It comes in three models—Haiku, Sonnet, and Opus—each tailored to different levels of complexity and use cases. This stratification allows users to choose the most appropriate model for their specific needs, from simple queries to complex analysis.
  • ChatGPT excels in creating conversational AI that can engage in detailed discussions, answer a wide range of questions, and generate human-like text. Its strength lies in its adaptability across various domains, though it may not match Claude 3’s capabilities in vision-related tasks or the specific benchmarks where Claude 3 leads.
  • Gemini has been a strong contender in blending textual and visual information processing. While it continues to perform well in vision tasks, the emergence of Claude 3 has highlighted areas for improvement, especially in tasks that require a deeper contextual understanding and precision.

Request Performance

Another point where Claude 3 shines is its confidence. It has lower refusal rates, meaning it declines requests less often than other models, which shows it is more capable of handling a variety of tasks. While all this sounds promising, there’s more to come. Experts are planning a detailed comparison between Claude 3, GPT-4, and Gemini Ultra. This will give us a clearer picture of how each model performs in different situations, helping you decide which one would be the best fit for your needs.

Cost and Accessibility

  • Claude 3‘s pricing model is designed to cater to both casual users and enterprises, with its premium Opus model requiring a subscription. This approach allows users to scale their usage according to their needs, although the higher cost for Opus reflects its advanced capabilities.
  • ChatGPT and Gemini both offer tiered pricing models to accommodate different levels of usage and capabilities. The cost structure of these models typically varies based on API usage, with specific pricing strategies aimed at making these tools accessible while offering scalable solutions for developers and businesses.

Ethical Considerations and Limitations

Each model incorporates ethical considerations and limitations to prevent misuse. Claude 3, for example, has been noted for its low false refusal rates and sensitivity to ethical guidelines, even in challenging scenarios. However, all models, including Claude 3, face challenges in completely eliminating bias and ensuring equitable treatment across diverse queries.

Claude 3 Benchmarks

If you’re interested in learning more about the performance you can expect from the three AI models that make up the Claude 3 family, check out our previous article, which explores the arrival of the new flagship AI and the Claude 3 benchmarks released by Anthropic.

As you consider integrating AI language models into your work, keep an eye on Claude 3. It’s already setting new standards and its future developments are expected to further influence how we interact with machines and develop software. Whether you’re a developer, a business owner, or just someone fascinated by AI, Claude 3 is a model to watch. To learn more about the latest AI models released by Anthropic in the form of Claude 3 jump over to the official press release for more details.



How to fine tune large language models (LLMs) with memories


If you would like to learn how to fine-tune large language models (LLMs) to improve their ability to memorize and recall information from a specific dataset, you might be interested to know that the fine-tuning process involves creating a synthetic question-and-answer dataset from the original content, which is then used to train the model.

This approach is designed to overcome the limitations of language models that typically struggle with memorization due to the way they are trained on large, diverse datasets. To explain the process in more detail, Trelis Research has created an interesting guide and overview on how you can fine-tune large language models for memorization.

Imagine you’re working with a language model, a type of artificial intelligence that processes and generates human-like text. You want it to remember and recall information better, right? Well, there’s a way to make that happen, and it’s called fine-tuning. This method tweaks the model to make it more efficient at holding onto details, which is especially useful for tasks that need precision.

Language models are smart, but they have a hard time keeping track of specific information. This problem, known as the “reversal curse,” happens because these models are trained on huge amounts of varied data, which can overwhelm their memory. To fix this, you need to teach the model to focus on what’s important.

Giving LLMs memory by fine tuning

One effective way to do this is by creating a custom dataset that’s designed to improve memory. You can take a document and turn it into a set of questions and answers. When you train your model with this kind of data, it gets better at remembering because it’s practicing with information that’s relevant to what you need.
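
As a concrete illustration of what such a dataset can look like, here is a small sketch that writes question-and-answer pairs to a JSON Lines file, a format most fine-tuning tooling accepts; the pairs and field names are invented for illustration:

```python
import json

# Hypothetical Q&A pairs distilled from a source document
qa_pairs = [
    {"question": "What year was the company founded?", "answer": "1998"},
    {"question": "Who is the current CEO?", "answer": "Jane Smith"},
]

# One JSON object per line, each holding a rendered training example
with open("memorization_dataset.jsonl", "w") as f:
    for pair in qa_pairs:
        record = {"text": f"Question: {pair['question']}\nAnswer: {pair['answer']}"}
        f.write(json.dumps(record) + "\n")
```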

Now, fine-tuning isn’t just about the data; it’s also about adjusting certain settings, known as hyperparameters. These include things like how much data the model sees at once (batch size), how quickly it learns (learning rate), and how many times it goes through the training data (epoch count). Tweaking these settings can make a big difference in how well your model remembers.
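
In the Hugging Face ecosystem, these three knobs live on a single configuration object. A sketch of where they are set, with illustrative values rather than recommendations:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qa-memorization-run",
    per_device_train_batch_size=4,  # how much data the model sees at once
    learning_rate=1e-5,             # how quickly the model updates its weights
    num_train_epochs=3,             # passes over the training data
    logging_steps=10,
)
```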


Fine tuning large language models

Choosing the right model to fine-tune is another crucial step. You want to start with a model that’s already performing well before you make any changes. This way, you’re more likely to see improvements after fine-tuning. For fine-tuning to work smoothly, you need some serious computing power. That’s where a Graphics Processing Unit (GPU) comes in. These devices are made for handling the intense calculations that come with training language models, so they’re perfect for the job.

Once you’ve fine-tuned your model, you need to check how well it’s doing. You do this by comparing its performance before and after you made the changes. This tells you whether your fine-tuning was successful and helps you understand what worked and what didn’t. Fine-tuning is a bit of an experiment. You’ll need to play around with different hyperparameters and try out various models to see what combination gives you the best results. It’s a process of trial and error, but it’s worth it when you find the right setup.

To really know if your fine-tuned model is up to par, you should compare it to some of the top models out there, like GPT-3.5 or GPT-4. This benchmarking shows you how your model stacks up and where it might need some more work.

So, if you’re looking to enhance a language model’s memory for your specific needs, fine-tuning is the way to go. With a specialized dataset, the right hyperparameter adjustments, a suitable model, and the power of a GPU, you can significantly improve your model’s ability to remember and recall information. And by evaluating its performance and benchmarking it against the best, you’ll be able to ensure that your language model is as sharp as it can be.



ChatHub AI lets you run large language models (LLMs) side-by-side


If you are searching for a way to run large language models (LLMs) side-by-side to see which provides the best results, you might be interested in a new application called ChatHub that allows you to talk to artificial intelligence (AI) as easily as chatting with a friend. At the heart of ChatHub is its ability to connect you to several LLMs all in one place. Use ChatGPT, Bing Chat, Google Bard, Claude 2, Perplexity, and other open-source large language models as you need.

This means you don’t have to jump from one website to another to try out different AI models. You can see how up to six LLMs perform right next to each other, comparing their creativity, speed, and accuracy. This not only saves you time but also helps you get the best results by combining the strengths of each model.

ChatHub has been specifically designed to incorporate features that make your life easier when using AI, like the ability to quickly copy information, track your history, and search swiftly through past interactions. These aren’t just convenient; they give you more control over how you use AI, making your work more efficient. The development team responsible for creating the platform also created a Chrome extension.

Using ChatHub to access different AI models

One of the coolest things about ChatHub is its prompt library. It’s full of prompts created by the community and a tool that helps you come up with your own. This is a huge help, whether you’re new to AI or you’ve been using it for a while. It guides you in asking the right questions to get the most useful answers from the AI.


Easily switch between AI models


ChatHub is all about giving you choices. You can switch between popular LLMs depending on what you need at the moment. This flexibility means that the platform can adapt to a wide range of tasks, whether you’re writing a report, analyzing data, or just exploring what AI can do. For those who need even more customization, ChatHub has an API integration feature. This lets you add your own chat models using API keys. It opens up a world of possibilities for tasks that are specific to your needs or your business.
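
In practice, “adding your own chat model with an API key” usually means pointing an OpenAI-compatible client at a provider’s endpoint. Here is a hypothetical sketch of what that kind of configuration involves; the base URL and model name are placeholders, not real ChatHub settings:

```python
from openai import OpenAI

# Placeholder endpoint and key for any OpenAI-compatible provider
client = OpenAI(
    api_key="YOUR_PROVIDER_KEY",
    base_url="https://api.example-llm-provider.com/v1",
)

reply = client.chat.completions.create(
    model="example-model-name",
    messages=[{"role": "user", "content": "Introduce yourself in one line."}],
)
print(reply.choices[0].message.content)
```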

Some LLMs on ChatHub have special skills, like recognizing images or browsing the web. These abilities take what you can do with AI to a whole new level. You could analyze pictures or pull information from the internet, making ChatHub a versatile tool in your AI arsenal.

Now, it’s true that ChatHub might not have every single feature that some of its competitors offer. For example, OpenAI’s ChatGPT Plus has some functionalities that you won’t find on ChatHub. But what sets ChatHub apart is its pricing. You pay once to get a license, and you don’t have to worry about monthly subscriptions. Plus, they sometimes have discounts, which can make it a more affordable option.

So, if you’re looking to dive into the world of AI, or if you’re already swimming in it and need a better tool, ChatHub could be just what you need. It’s designed to make working with AI simpler and more effective, whether you’re using it for business, research, or personal projects. With its user-friendly interface and a wide range of features, ChatHub is ready to take your AI experience to the next level.



How to install CrewAI and run AI models locally for free


If you have been hit with large costs when using OpenAI’s API or similar services, you might be interested to know how you can install and run CrewAI locally and for free. Imagine having the power of advanced artificial intelligence right at your fingertips, on your very own computer, without spending a dime on cloud services. This is now possible with the help of tools like Ollama, which allow you to manage and run large language models (LLMs) such as Llama 2 and Mistral. Whether you’re just starting out or you’re an experienced user, this guide will walk you through the process of setting up and using CrewAI with Ollama, making it a breeze to harness the capabilities of these sophisticated models.

Ollama acts as your personal assistant in deploying LLMs on your computer. It simplifies the task of handling these complex models, which usually require a lot of computing power. With Ollama, you can run models like Llama 2, which Meta developed and which needs a good amount of RAM to work well. You’ll also get to know Mistral, an LLM that might outperform Llama 2 in some tasks.
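
Once Ollama is installed and a model has been pulled (for example, llama2 or mistral from the Ollama library), it can be queried directly from Python. A minimal sketch using the ollama client package, assuming the local Ollama server is running:

```python
import ollama  # pip install ollama; talks to the local Ollama server

# Chat with a locally hosted model; "llama2" can be swapped for "mistral"
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "In one sentence, what is a local LLM?"}],
)
print(response["message"]["content"])
```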

Installing CrewAI locally

To get started with CrewAI, a flexible platform for creating AI agents capable of complex tasks, you’ll need to install it on your machine. Begin by downloading the open-source code, which comes with everything you need for CrewAI to work, including scripts and model files.


Once CrewAI is installed, the next step is to set up your LLMs for the best performance. This means adjusting model files with parameters that fit your needs. You also have to set environment variables that help your LLMs communicate with the CrewAI agents. To activate your LLMs within CrewAI, you’ll run scripts that create new models that work with CrewAI. These scripts, which you got when you downloaded the source code, get your LLMs ready to do the tasks you’ve set for them.

When working with LLMs on your own computer, it’s important to know exactly what you want to achieve. You need to give clear instructions to make sure your AI agents do what you expect. Remember that local models might not have the same processing power or access to huge datasets that cloud-based models do.

To install and run CrewAI for free locally, follow a structured approach that leverages open-source tools and models, such as LLaMA 2 and Mistral, integrated with the CrewAI framework. This comprehensive guide is designed to be accessible for users of varying skill levels, guiding you through the process step by step.

How to install AI models locally on your computer

Begin by ensuring you have a basic understanding of terminal or command line interface operations, as well as ensuring your computer meets the necessary hardware specifications, particularly in terms of RAM, to support the models you plan to use. Additionally, having Python installed on your system is a key requirement. Common issues might include ensuring your system has sufficient RAM and addressing any dependency conflicts that arise. If you encounter problems, reviewing the setup steps and verifying the configurations are correct can help resolve many common issues.

1: Setting Up Your Environment

The initial step involves preparing your working environment. This includes having Python and Git available on your computer. You’ll need to clone the CrewAI framework’s repository to your local machine, which provides you with the necessary files to get started, including example agents and tasks.

2: Downloading and Setting Up LLaMA 2 and Mistral

With your environment set up, the next step is to download the LLaMA 2 and Mistral models using a tool designed for managing large language models locally. This tool simplifies the process of downloading, installing, and running these models on your machine. Follow the tool’s instructions to get both LLaMA 2 and Mistral set up and ensure they are running correctly by performing test runs.

3: Integrating LLaMA 2 and Mistral with CrewAI

Once the models are running locally, the next task is to integrate them with the CrewAI framework. This typically involves adjusting CrewAI’s settings to point to the local instances of LLaMA 2 and Mistral, allowing the framework to utilize these models for processing data. After configuring, verify that CrewAI can communicate with the models by conducting a simple test.

4: Running Your First CrewAI Agent

With the models integrated, you’re ready to run your first CrewAI agent. Define what tasks and objectives you want your agents to achieve within the CrewAI framework. Then, initiate your agents, which will now leverage the local models for their operations. This process involves running the CrewAI framework and monitoring its performance and outputs, as in the sketch below.
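
Here is a minimal sketch of what such an agent run can look like, assuming an early-2024 CrewAI install where local Ollama models are passed in as LangChain LLM objects; the role, goal, and task text are illustrative:

```python
from crewai import Agent, Task, Crew
from langchain_community.llms import Ollama

# Local model served by Ollama; "mistral" works the same way
local_llm = Ollama(model="llama2")

researcher = Agent(
    role="Researcher",
    goal="Summarize key facts about a topic",
    backstory="A diligent analyst who writes concise briefings.",
    llm=local_llm,
)

task = Task(
    description="Write a three-bullet briefing on running LLMs locally.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())
```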

5: Advanced Configuration

As you become more familiar with running CrewAI locally, you may explore advanced configurations, such as optimizing the system for better performance or developing custom agents tailored to specific tasks. This might involve adjusting the models used or fine-tuning the CrewAI framework to better suit your requirements.

By following this guide, you can set up and use CrewAI on your computer for free. This lets you build AI agents for complex tasks using powerful LLMs like Llama 2 and Mistral AI. While there are some limits to what local models can do, they offer a cost-effective and accessible way to explore what LLMs can offer. If you want to learn more, there are plenty of resources and tutorials available to deepen your understanding of these technologies.

By using Ollama to set up LLMs with CrewAI and understanding how to give detailed task instructions, you can dive into the world of local LLMs. Take this opportunity to start developing AI on your own, free from the need to rely on cloud-based services.



How to build ChatGPT custom GPT AI models


If, like me, you’re using custom GPTs to help with your daily workload, you might be interested in learning more about how to integrate external APIs to expand the functionality of your OpenAI GPT. Imagine having the power to create an intelligent digital assistant that can converse, perform tasks, and even embody your personal style or brand.

This is now possible thanks to the recent launch of custom GPTs by OpenAI. The advanced technology of Generative Pre-trained Transformers, commonly known as GPT, has made it simpler for individuals to make their own AI assistants without needing to write a single line of code. This guide will walk you through the process of building a GPT model tailored to your needs, whether you’re looking to create a virtual tutor, a fitness coach, or any other type of AI assistant.

To start your project, you’ll need to visit OpenAI’s GPT Builder by signing up for a ChatGPT Plus, Teams, or Enterprise account. The GPT construction area is divided into two main sections: ‘Create’ and ‘Configure.’ In the ‘Create’ section, you’ll define the purpose of your GPT. The possibilities are endless, and the choice is yours. Once you’ve decided on the role your GPT will play, you’ll move on to the ‘Configure’ section. Here, you’ll adjust the features and behaviors of your GPT to align with your vision.

Customizing your GPT is made easy with an intuitive conversational interface. It’s like teaching a new colleague how to perform their job. You’ll specify the functions you want your GPT to have and how it should interact with users. During this stage, you can also give your GPT a name, write a description, and provide user instructions that reflect your identity or your brand’s image.

Building Custom GPTs with ChatGPT

A unique visual identity is important for any product, and your GPT is no exception. You have the option to create and upload a custom logo, which will help users recognize and remember your GPT. This visual touch can make your GPT stand out and be associated with your unique style. For your GPT to be effective, it needs to be knowledgeable. You can upload reference materials that are relevant to its purpose. These materials could be anything from technical documents to works of fiction, depending on what you want your GPT to do.


User engagement is key to the success of your GPT. To keep users coming back, set a friendly tone and include prompts that make interactions feel more natural. This will encourage users to engage with your GPT regularly.

To give your GPT even more functionality, consider integrating external APIs. This will allow your GPT to do things like search the web, create images, or understand code. These capabilities can make your GPT more versatile and useful than standard models. User privacy should be a top priority. Before you launch your GPT, test it thoroughly and adjust the settings to protect user data. Building trust with your users is essential.
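
Custom actions are described to a GPT with an OpenAPI schema. Here is a hypothetical minimal example, written as a Python dictionary for readability; the fitness endpoint and operation are invented for illustration:

```python
# Hypothetical OpenAPI schema describing one custom action for a GPT
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Workout API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example-fitness.com"}],
    "paths": {
        "/workouts/today": {
            "get": {
                "operationId": "getTodaysWorkout",
                "summary": "Fetch the workout plan for today",
                "responses": {"200": {"description": "Today's workout as JSON"}},
            }
        }
    },
}
```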

How to build custom GPTs

1: Plan Your Custom GPT

Conceptualization is crucial as it lays the foundation for your GPT. Start by identifying a gap in the current market or a specific need within a community or industry. Consider the following:

  • Market Research: Conduct thorough market research to understand existing solutions and identify what they lack. This could involve analyzing competitor products, reading user reviews, or engaging with potential users on forums and social media.
  • Unique Value Proposition (UVP): Define what makes your GPT different. This could be a unique dataset, a novel application of AI, or a specific problem-solving approach that’s not currently available.
  • User Personas: Create detailed user personas representing your target audience. What are their goals, challenges, and preferences? How does your GPT solve their problems or enhance their lives?
  • Feasibility Study: Assess the technical feasibility of your idea. Do you have access to the necessary data? Can the current GPT technology support your concept effectively?

2: Access the GPT Builder

The GPT Builder is your workspace for bringing the concept to life. Familiarize yourself with its interface and capabilities:

  • Tutorial and Documentation: Spend some time going through any available tutorials or documentation to understand the full capabilities of the GPT Builder.
  • Prototyping: Use the “Create” section for rapid prototyping. This allows you to experiment with different ideas and get immediate feedback on how your GPT might function.
  • Feedback Loop: Leverage the conversational interface to refine your idea. The immediate feedback can help you iterate quickly and refine your concept before moving on to detailed configuration.

3: Define the GPT’s Function and Audience

Clarity on function and audience is essential for designing interactions and content:

  • Use Cases: Detail specific use cases of how your GPT will be used. This helps in designing the flow of interaction and ensuring that the GPT meets user needs effectively.
  • Audience Engagement: Think about how you will engage your target audience. What platforms do they use? How can you make your GPT accessible and appealing to them?
  • Accessibility and Inclusivity: Consider how to make your GPT accessible to a wider audience, including those with disabilities. This could involve voice commands, screen reader compatibility, and multilingual support.

4: Customization and Branding

Customization and branding are key to standing out:

  • Brand Personality: Your GPT’s name, description, and logo should reflect its personality and how you want users to perceive it. Is it professional, friendly, quirky, or inspirational? The branding should align with this.
  • Visual Identity: Consider color schemes, typography, and imagery that align with your brand. These elements should be consistent across all user touchpoints, from the GPT interface to marketing materials.
  • Trademark Checks: Ensure that your GPT’s name and logo are unique and do not infringe on existing trademarks. This is crucial for legal protection and brand identity.

5: Upload Reference Material

The quality and relevance of your reference material directly impact the GPT’s effectiveness:

  • Curated Content: Ensure the documents or links you use as references are high-quality, relevant, and up-to-date. This might involve curating content from reputable sources or creating custom content that perfectly fits your GPT’s purpose.
  • Diverse Sources: To avoid biases and enhance the comprehensiveness of your GPT, include a diverse range of sources. This could mean using materials from different cultures, perspectives, and areas of expertise.
  • User Privacy and Data Security: When users interact with your GPT, especially in areas requiring personal data (like fitness advice), ensure you have measures in place to protect their privacy and secure their data.

6: Define Interaction Styles

The interaction style of your GPT significantly impacts user engagement and satisfaction. Consider the following:

  • Tone and Language: The tone should match your target audience and the purpose of your GPT. For a fitness GPT, a motivational and encouraging tone could be effective. For a professional tool, a straightforward and informative tone may be more appropriate.
  • Personalization: Implementing personalized responses based on user input or preferences can enhance the user experience. This could involve remembering user names or previous interactions to create a more conversational and engaging experience.
  • Cultural Sensitivity: Be mindful of cultural differences and ensure your GPT’s interactions are inclusive and respectful to all users. This may involve localizing content for different regions or avoiding language that could be culturally sensitive.

7: Advanced Features and Custom Actions

Leveraging advanced features and custom actions can extend the capabilities of your GPT, making it more powerful and versatile:

  • Web Browsing: Enabling web browsing allows your GPT to pull in current information from the web, enriching its responses. However, consider the reliability of sources and potential privacy implications.
  • Image Generation: For GPTs related to creative tasks, enabling image generation can provide users with visual content, enhancing interaction. Ensure generated images are appropriate and respect copyright laws.
  • Custom Plugins and APIs: Integrating external APIs can extend the functionality of your GPT, allowing it to perform actions like booking appointments, sending notifications, or accessing specialized databases. Ensure secure and efficient use of APIs to maintain performance and user trust.

8: Testing and Refinement

Thorough testing is critical to ensure your GPT functions as intended and delivers a high-quality user experience:

  • Functional Testing: Verify all features work correctly across different devices and platforms. This includes testing custom actions, response accuracy, and performance under various conditions.
  • User Feedback: Conduct user testing sessions to gather feedback on usability, engagement, and usefulness. Real-user insights can highlight issues you might not have considered and suggest improvements.
  • Iterative Refinement: Use feedback and testing results to make iterative improvements. This might involve refining responses, tweaking the UI, or adding new functionalities based on user demand.

9: Deployment

Choosing the right deployment strategy can affect the reach and success of your GPT:

  • Privacy Settings: Decide whether your GPT will be public, link-shared, or private. Consider your audience and the purpose of your GPT when making this decision.
  • GPT Store: Publishing on the GPT Store can provide visibility and monetization opportunities. Ensure your listing clearly communicates the value and functionality of your GPT to attract users.
  • Marketing: Develop a marketing plan to promote your GPT. This could include social media marketing, content marketing, or partnerships with influencers in your target market.

10: Ongoing Improvement and Support

Continuous improvement and active support are key to maintaining and growing your GPT’s user base:

  • User Support: Provide clear channels for user support and feedback. This could be through a dedicated support email, a feedback form, or social media engagement.
  • Updates and Enhancements: Regularly update your GPT to improve performance, add new features, and address user feedback. Communicate these updates to your users to keep them engaged.
  • Monitoring Usage: Use analytics to monitor how users interact with your GPT. This data can inform decisions about future improvements and identify new opportunities for engagement.

By meticulously addressing each of these expanded steps, you can create a custom GPT that not only fulfills a unique niche but also offers a meaningful and engaging experience to its users. When you’re happy with your GPT, it’s time to share it with the world. You can save, share, and publish your GPT on the GPT store. This is a crucial step that will introduce your GPT to a global audience, where it can provide help, entertainment, or education.

Creating a custom GPT is now within reach for many, thanks to the no-code platform and the variety of customization options available. You can create a GPT that not only serves a specific function but also represents your unique vision. Whether you’re using it for work or for personal projects, your custom GPT is just a few steps away.



Qualcomm AI Hub with 75+ AI models showcased at MWC 2024


At the prestigious Mobile World Congress in Barcelona, Qualcomm Technologies, Inc. made a significant announcement that’s stirring excitement among mobile developers. The company introduced the Qualcomm AI Hub, a new platform that’s poised to make a substantial impact on artificial intelligence within the realm of mobile development. This platform is not just another addition to the tech landscape; it’s a comprehensive resource that offers a vast library of AI models, all designed to improve the way we interact with a wide range of devices.

The Qualcomm AI Hub is a goldmine for developers, especially those who work with Snapdragon and Qualcomm platforms. It features an impressive collection of more than 75 AI and generative AI models. These models are not just any models; they are pre-optimized to work seamlessly with the Qualcomm AI Engine. What does this mean for developers and users alike? It means faster inferencing and enhancements that are aware of the hardware’s capabilities, leading to unmatched AI performance right on the device. These models are not hidden away; they are accessible through the Qualcomm AI Hub, GitHub, and Hugging Face, and the company promises they will keep them up-to-date with ongoing support.
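
Because the collection is mirrored on Hugging Face, the pre-optimized models can also be browsed programmatically. A small sketch using the huggingface_hub client, assuming the models are published under the qualcomm organization as the announcement indicates:

```python
from huggingface_hub import list_models

# List model repos published under Qualcomm's Hugging Face organization
for model in list_models(author="qualcomm"):
    print(model.id)
```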

One of the most exciting aspects of this launch is Qualcomm’s partnership with Hugging Face. This collaboration is strategic and significant because it aims to make these AI models accessible to a broader range of mobile developers. By doing so, it’s set to change the game for edge AI applications, providing developers with the tools they need to incorporate advanced AI features into their applications.

At the event, Qualcomm didn’t just talk the talk; they walked the walk. They showcased their Large Language and Vision Assistant (LLaVA) and Low Rank Adaptation (LoRA) technologies. These are not just fancy acronyms; they are technologies at the cutting edge, designed to enhance the AI capabilities of devices, making them more efficient at handling complex AI tasks.

The practical side of AI was also on full display. Qualcomm demonstrated how AI-powered features are being integrated into a variety of commercial products. From smartphones to PCs, from cars to consumer IoT, and across connectivity and 5G infrastructure, the applications of AI were evident. These examples not only showed the real-world benefits of AI but also hinted at the exciting possibilities that lie ahead.

But Qualcomm’s vision extends beyond just user-facing products. They are also innovating behind the scenes with AI optimizations in their Snapdragon X80 Modem-RF System and Qualcomm FastConnect 7900. These systems are engineered to elevate the performance of the next generation of mobile devices, ensuring faster and more reliable connections.

The company is also at the forefront of developing AI-based solutions for network management. They are introducing innovations like a generative AI assistant for RAN engineers, an AI-based open RAN application, and a suite for managing the lifecycle of 5G network slices. These tools are designed to streamline network operations and offer smarter management capabilities for the ever-evolving 5G network.

The announcements made by Qualcomm at the Mobile World Congress in 2024 are a clear indication of the company’s focus on advancing AI technology. The Qualcomm AI Hub, with its extensive collection of optimized AI models and cutting-edge technologies, is empowering developers to create innovative applications. These applications are set to redefine what mobile devices are capable of and establish new benchmarks for the industry.

Qualcomm AI Hub: A Catalyst for Mobile AI Development

The Qualcomm AI Hub is a groundbreaking platform that represents a significant leap forward in the integration of artificial intelligence into mobile technology. This hub provides a centralized resource for developers, offering access to a comprehensive library of AI models. These models are specifically tailored to enhance the functionality of mobile devices, enabling them to perform complex AI tasks more efficiently. The AI Hub is not just a repository of models; it is a dynamic ecosystem that supports the Snapdragon and Qualcomm platforms, ensuring that developers have the tools they need to push the boundaries of what mobile devices can do.

The AI models available through the Qualcomm AI Hub are pre-optimized for the Qualcomm AI Engine, which translates to rapid inferencing capabilities. This optimization is crucial because it allows for the AI models to be executed directly on the device, rather than relying on cloud processing. This on-device processing capability leads to faster response times and improved performance, which is essential for applications that require real-time AI computations. The accessibility of these models through platforms like GitHub and Hugging Face is a strategic move by Qualcomm to democratize AI development, making it possible for a wider range of developers to innovate in the mobile space.

Strategic Partnerships and Cutting-Edge Technologies

Qualcomm’s collaboration with Hugging Face is a strategic partnership that aims to broaden the reach of AI technologies to a more diverse group of mobile developers. This partnership is a key element in the push for more advanced edge AI applications. Edge AI refers to the processing of AI algorithms directly on a device, rather than in the cloud. This approach has numerous benefits, including reduced latency, increased privacy, and the ability to function without an internet connection. By making AI models more accessible, Qualcomm and Hugging Face are enabling developers to incorporate sophisticated AI features into their applications, which can lead to more personalized and responsive user experiences.

At the Mobile World Congress, Qualcomm showcased their Large Language and Vision Assistant (LLaVA) and Low Rank Adaptation (LoRA) technologies. These technologies are at the forefront of AI innovation, designed to enhance the capabilities of devices in understanding and processing natural language and visual information. LLaVA and LoRA are not just theoretical concepts; they are practical solutions that improve the efficiency of AI tasks on devices, making them smarter and more capable of handling the demands of modern applications.

AI Integration in Commercial Products and Network Management

Qualcomm’s demonstration of AI-powered features in various commercial products highlighted the practical applications of AI in everyday technology. The integration of AI into smartphones, PCs, automobiles, consumer IoT devices, and 5G infrastructure showcases the versatility of AI and its potential to transform a wide array of industries. These real-world examples serve as a glimpse into the future, where AI is seamlessly woven into the fabric of our technological experiences, enhancing functionality and user interaction.

Behind the scenes, Qualcomm is also making strides in AI with optimizations in their Snapdragon X80 Modem-RF System and Qualcomm FastConnect 7900. These systems are engineered to boost the performance of next-generation mobile devices, ensuring that they can handle faster and more reliable connections. This is particularly important as we move into an era where 5G connectivity is becoming the standard, and the demand for high-speed, low-latency connections is growing.

In addition to user-facing innovations, Qualcomm is developing AI-based solutions for network management. They are introducing tools like a generative AI assistant for RAN engineers, an AI-based open RAN application, and a suite for managing the lifecycle of 5G network slices. These tools are designed to simplify network operations and provide more intelligent management capabilities, which are essential for the complex and dynamic nature of 5G networks.

The advancements announced by Qualcomm at the Mobile World Congress highlight the company’s dedication to pushing the envelope in AI technology. The Qualcomm AI Hub, with its extensive collection of optimized AI models and state-of-the-art technologies, is equipping developers with the resources they need to create groundbreaking applications. These applications are poised to redefine the capabilities of mobile devices and set new standards for the industry.
