
Meta rolls out an updated AI assistant, built with the long-awaited Llama 3


Meta has announced a major update for its AI assistant platform, Meta AI, which has been rebuilt on the long-awaited open-source Llama 3 large language model (LLM). The company says it’s “now the most intelligent AI assistant you can use for free.” As for use case scenarios, the company touts the ability to help users study for tests, plan dinners and schedule nights out. You know the drill. It’s an AI chatbot.

Meta AI, however, has expanded into just about every nook and cranny of the company’s portfolio after an initial test run. It’s still available in Instagram, but users can now also access it in Messenger, Facebook feeds and WhatsApp. The chatbot also has a dedicated web portal at, wait for it, meta.ai. You don’t need a company login to use it this way, though it won’t generate images. The recently released Ray-Ban Meta smart glasses also integrate with the bot, with Quest headset integration coming soon.

On the topic of image generation, Meta says it’s now much faster and will produce images as you type. It also handles custom animated GIFs, which is pretty cool. Hopefully, it can successfully generate images of people of different races. We found that it struggled with this a couple of weeks back, as it seemed biased toward creating images of people of the same race, even when prompted otherwise.

Meta’s also expanding global availability along with this update, as Meta AI is coming to more than a dozen countries outside of the US. These include Australia, Canada, Ghana, Jamaica, Pakistan, Uganda and others. However, there’s one major caveat. It’s only in English, which doesn’t seem that useful to a global audience, but whatever.

As for safety and reliability, the company says Llama 3 has been trained on an expanded data set when compared to Llama 2. It also used synthetic data to create lengthy documents to train on and claims it excluded all data sources that are known to contain a “high volume of personal information about private individuals.” Meta says it conducted a series of evaluations to see how the chatbot would handle risk areas like conversations about weapons, cyber attacks and child exploitation, and adjusted as required. In our brief testing with the product, we’ve already run into hallucinations, as seen below.

Meta AI makes a mistake on a recipe.

Engadget/Karissa Bell

AI has become one of Meta CEO Mark Zuckerberg’s chief preoccupations, along with raising cattle in a secluded Hawaiian compound, but the company’s still playing catch-up to OpenAI and, to a lesser extent, Google. Meta’s Llama 2 never really wowed users, due to a limited feature set, so maybe this new version of the AI assistant will catch lightning in a bottle. At the very least, it should be able to draw lightning in a bottle, or more accurately, slightly tweak someone else’s drawing of lightning in a bottle.




AI chip built using ancient Samsung tech is claimed to be as fast as Nvidia A100 GPU — prototype is smaller and much more power efficient but is it just too good to be true?


Scientists from the Korea Advanced Institute of Science and Technology (KAIST) have unveiled an AI chip that they claim can match the speed of Nvidia‘s A100 GPU but with a smaller size and significantly lower power consumption. The chip was developed using Samsung‘s 28-nanometer manufacturing process, a technology considered relatively old in the fast-moving world of semiconductors.

The team, led by Professor Yoo Hoi-jun at KAIST’s processing-in-memory research center, has developed what it says is the world’s first ‘Complementary-Transformer’ (C-Transformer) AI chip. This neuromorphic computing system mimics the structure and workings of the human brain, using a deep learning model often employed in visual data processing.




Ondsel ES, an engineering suite built on FreeCAD


In the world of 3D CAD design, a new solution is capturing the attention of professionals in the form of Ondsel ES, built upon the sturdy foundations of the open-source FreeCAD platform. This suite of tools is transforming the way designers collaborate and manage their workflows, and it is quickly becoming a preferred choice for those who value a balance between ease of use and advanced functionality. It offers an open-source CAD (Computer-Aided Design) tool designed to meet the needs of engineers and designers who are looking for a more collaborative and user-friendly experience.

At the heart of Ondsel ES is its integrated assembly workbench. This feature allows users to easily link parts together and select from a range of joint types. This simplifies the workflow and enables the creation of complex assemblies with a level of ease and accuracy that was previously hard to come by. The ability to construct intricate designs without the usual hassle is a significant advantage for any engineer or designer.
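The assembly workbench is normally driven through the GUI, but because Ondsel ES inherits FreeCAD’s Python scripting layer, the parts that go into an assembly can also be created and positioned programmatically. Here is a minimal sketch using FreeCAD’s standard scripting API; the document, object and file names are made up for illustration, and it should be run inside the application’s Python console or with its bundled interpreter:

```python
# A minimal sketch using FreeCAD's standard Python scripting API, which
# Ondsel ES inherits. Object and file names are illustrative only.
import FreeCAD as App
import Part  # loads the Part workbench types used below

doc = App.newDocument("AssemblyDemo")

# A simple base plate...
base = doc.addObject("Part::Box", "BasePlate")
base.Length, base.Width, base.Height = 100, 100, 10

# ...and a pin positioned on top of it.
pin = doc.addObject("Part::Cylinder", "Pin")
pin.Radius, pin.Height = 5, 40
pin.Placement = App.Placement(App.Vector(50, 50, 10), App.Rotation())

doc.recompute()
doc.saveAs("assembly_demo.FCStd")
```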

Collaboration is key in the design world, and Ondsel ES understands this. The suite includes tools that allow for secure cloud-based sharing, storage, and modification of CAD models. With the Ondsel Lens digital vault, team members can access the latest models and make real-time contributions, no matter where they are located. This ensures that everyone is working on the most current version of a project, which is crucial for maintaining consistency and quality.


One of the most appealing aspects of Ondsel ES is its commitment to open-source principles. The software is not only free to use, but it is also transparent. The file formats are open and documented, which means that users are not tied to a single vendor and have complete control over their work. This level of autonomy is highly valued in the engineering and design community.


Ondsel ES has tackled the topological naming problem, which is a common issue in CAD software. This problem can lead to errors and instability in models over time. Ondsel ES’s solution enhances the stability of models, making the management and updating of designs much smoother. This reduces the occurrence of errors that can disrupt the design process.

User experience (UX) and user interface (UI) are critical components of any software, and Ondsel ES places a strong emphasis on both. The suite is continuously improved based on feedback from the user community. This ensures that the software remains intuitive and easy to navigate, allowing users to focus on their designs rather than on figuring out how to use the tool.

For those facing complex design challenges, Ondsel ES is equipped with advanced features in kinematics, multi-body dynamics, and CNC (Computer Numerical Control). These features have been developed by experts with a deep understanding of CAD, ensuring that the suite can handle the demands of sophisticated design tasks.

Ondsel ES also contributes back to the open-source community by backporting enhancements to the upstream FreeCAD project. This not only benefits the wider FreeCAD community but also promotes a culture of collaboration and collective progress within the open-source domain.

Ondsel ES invites engineers and designers to join their community, contribute to the project, and take advantage of the advanced capabilities of this new open-source CAD tool. Whether you are an experienced professional or new to the world of CAD, Ondsel ES offers a powerful, collaborative, and accessible design experience that can enhance your creative process.

Filed Under: Technology News, Top News







AAEON UP Xtreme 7100 mini PC built for robotic applications

AAEON UP Xtreme 7100 mini PC

The world of robotics is constantly evolving, and with the introduction of the AAEON UP Xtreme 7100 robotics mini PC, we are witnessing a significant leap forward in the capabilities of robotic computing. AAEON, a renowned developer of advanced industrial and embedded computing platforms, has unveiled this new Mini PC that is set to make a substantial impact on the robotics industry. The UP Xtreme 7100 is a compact, yet powerful computing solution that is ideal for a range of robotic applications, including Automated Guided Vehicles (AGV), AGVs with AI, and Autonomous Mobile Robots (AMR).

At the core of the UP Xtreme 7100 robotics mini PC are the Intel Core i3-N305 and Intel Processor N97 CPUs. These processors are chosen for their ability to deliver a perfect balance between energy efficiency and processing power. This is crucial for robotics applications where maintaining high performance without consuming excessive power is a must. The UP Xtreme 7100’s design is notably compact, which is a significant advantage when it comes to integrating the system into the tight confines of AGVs and AMRs. The board itself measures just 120.35 mm by 122.5 mm, and the Mini PC version has dimensions of 152 mm by 124 mm by 40 mm, showcasing its space-efficient design.

AAEON UP Xtreme 7100 internal hardware


Connectivity is a breeze with the UP Xtreme 7100, thanks to its wide array of I/O options. It includes terminal blocks for serial communication, a 30-pin connector for digital I/O and isolated RS-232/422/485, as well as several high-speed I/O ports. These ports include two RJ-45 ports, four USB Type-A ports, and one USB Type-C port that also supports DisplayPort 1.4a. For display output, there’s an eDP 1.3 connector. The device also facilitates easy integration with CANBus networks, which are essential for industrial and automotive applications, through its onboard CAN 2.0B, DIP switch, and LED indicators.
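As a concrete illustration of how the onboard CAN 2.0B controller might be used, here is a minimal sketch in Python with the python-can library over Linux SocketCAN. The channel name “can0” and the arbitration ID are assumptions for illustration; the actual interface name depends on the board’s driver and BSP configuration:

```python
# Minimal sketch: talk to the UP Xtreme 7100's CAN 2.0B controller via
# SocketCAN using python-can (pip install python-can).
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")  # assumed channel

# Send a classic CAN 2.0B frame with an extended 29-bit identifier.
msg = can.Message(
    arbitration_id=0x18DAF110,   # hypothetical ID, for illustration only
    data=[0x01, 0x02, 0x03, 0x04],
    is_extended_id=True,
)
bus.send(msg)

# Block until a frame arrives, or time out after one second.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")

bus.shutdown()
```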

Durability is a key aspect of the UP Xtreme 7100 robotics mini PC, as it is built to withstand the rigors of industrial environments. It features a wide power input range and is designed to resist surges, vibrations, and shocks. The I/O ports are lockable, ensuring reliable performance even in challenging conditions. For those who require even more protection, there’s an optional shock absorber kit that can be added to the UP Xtreme 7100 Edge system-level solution, safeguarding the device from impacts and vibrations.

All-in-one robotics mini PC

Storage is another area where the UP Xtreme 7100 excels. It offers a variety of storage options, including up to 64 GB of eMMC, 6 Gb/s SATA, and an M.2 2280 M-Key slot. The device is also compatible with the Hailo-8 M.2 2280 AI module, which can significantly enhance its AI inferencing capabilities. To ensure that the UP Xtreme 7100 remains relevant in the future, it supports M.2 2230 E-Key and M.2 3052 B-Key for Wi-Fi and 5G connectivity, allowing users to keep their robotics systems up-to-date with the latest advancements in technology.

  • Intel Processor N-series and Intel Core i3-N305 processor options
  • Low power consumption
  • 2.5GbE x 2 (Intel I226-IT)
  • 2-channel CAN 2.0B x 1
  • Watchdog timer, Onboard TPM 2.0
  • DIO/GPIO via Terminal Block
  • Cable-free design
  • Wide 9V~36V power input
  • Fanless design

The AAEON UP Xtreme 7100 robotics mini PC solution is a robust, versatile, and space-saving computing solution that is designed to meet the demanding needs of modern robotics. With its powerful Intel CPUs, extensive connectivity options, and a sturdy build, it is well-equipped to advance the field of robotics technology. Whether it’s for AGVs, AMRs, or other robotic applications, the UP Xtreme 7100 is ready to take on the challenges of today’s and tomorrow’s computing demands.

Filed Under: Hardware, Top News







Real Gemini demo built using GPT-4 Vision, Whisper and TTS


If, like me, you were a little disappointed to learn that the Google Gemini demonstration released earlier this month was more about clever editing than technological advancement, you will be pleased to know that we may not have to wait too long before something similar is available to use.

After seeing the Google Gemini demonstration and the blog post revealing its secrets, Julien De Luca asked himself, “Could the ‘Gemini’ experience showcased by Google be more than just a scripted demo?” He then set about creating a fun experiment to explore the feasibility of real-time AI interactions similar to those portrayed in the Gemini demonstration. Here are a few restrictions he put on the project to keep it in line with Google’s original demonstration:

  • It must happen in real time
  • User must be able to stream a video
  • User must be able to talk to the assistant without interacting with the UI
  • The assistant must use the video input to reason about user’s questions
  • The assistant must respond by talking

Because ChatGPT Vision currently accepts only individual images, De Luca needed to upload a series of screenshots taken from the video at regular intervals for the GPT to understand what was happening.

“KABOOM! We now have a single image representing a video stream. Now we’re talking. I needed to fine tune the system prompt a lot to make it “understand” this was from a video. Otherwise it kept mentioning “patterns”, “strips” or “grid”. I also insisted on the temporality of the images, so it would reason using the sequence of images. It definitely could be improved, but for this experiment it works well enough,” explains De Luca. To learn more about this process jump over to the Crafters.ai website or GitHub for more details.
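To make the trick concrete, here is a minimal sketch of that frame-grid approach in Python: sample evenly spaced frames from a clip, tile them into one image, and send that single image to a vision-capable GPT-4 model. The file name, grid dimensions and prompt wording are assumptions for illustration, not De Luca’s actual code:

```python
# Sample frames from a video, tile them into a grid, and ask GPT-4 with
# vision to reason over the sequence. pip install opencv-python openai
import base64
import cv2
from openai import OpenAI  # needs OPENAI_API_KEY in the environment

def frame_grid(video_path: str, cols: int = 4, rows: int = 2):
    """Sample cols*rows evenly spaced frames and tile them into one image."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(cols * rows):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // (cols * rows))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (320, 180)))
    cap.release()
    rows_img = [cv2.hconcat(frames[r * cols:(r + 1) * cols]) for r in range(rows)]
    return cv2.vconcat(rows_img)

grid = frame_grid("clip.mp4")  # hypothetical input file
ok, buf = cv2.imencode(".jpg", grid)
b64 = base64.b64encode(buf).decode()

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "These tiles are sequential frames of one video, "
                     "left to right, top to bottom. Describe what happens."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```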

Real Google Gemini demo created

AI Jason has also created an example combining GPT-4, Whisper, and Text-to-Speech (TTS) technologies. Check out the video below for a demonstration and to learn more about creating one yourself by combining different AI technologies.


To create a demo that emulates the original Gemini with the integration of GPT-4V, Whisper, and TTS, developers embark on a complex technical journey. This process begins with setting up a Next.js project, which serves as the foundation for incorporating features such as video recording, audio transcription, and image grid generation. The implementation of API calls to OpenAI is crucial, as it allows the AI to engage in conversation with users, answer their inquiries, and provide real-time responses.
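The demo itself is a Next.js application, but the voice-in/voice-out loop it implements can be sketched compactly in Python against the corresponding OpenAI endpoints: Whisper for transcription, a vision-capable chat model for reasoning over the frame grid, and TTS for the spoken reply. The file names here are placeholders, and the grid image is assumed to have been produced as in the earlier sketch:

```python
# One turn of the Gemini-style loop: speech -> text -> vision-grounded
# answer -> speech. Assumes question.wav and grid.jpg already exist.
import base64
from openai import OpenAI  # needs OPENAI_API_KEY in the environment

client = OpenAI()

# 1. Speech -> text with Whisper.
with open("question.wav", "rb") as f:
    text = client.audio.transcriptions.create(model="whisper-1", file=f).text

# 2. Text + video frame grid -> answer.
b64_grid = base64.b64encode(open("grid.jpg", "rb").read()).decode()
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": [
        {"type": "text", "text": text},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{b64_grid}"}},
    ]}],
).choices[0].message.content

# 3. Answer -> audio with TTS.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.stream_to_file("answer.mp3")
```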

The design of the user experience is at the heart of the demo, with a focus on creating an intuitive interface that facilitates natural interactions with the AI, akin to having a conversation with another human being. This includes the AI’s ability to understand and respond to visual cues in an appropriate manner.

The reconstruction of the Gemini demo with GPT-4V, Whisper, and Text-To-Speech is a clear indication of the progress being made towards a future where AI can comprehend and interact with us through multiple senses. This development promises to deliver a more natural and immersive experience. The continued contributions and ideas from the AI community will be crucial in shaping the future of multimodal applications.

Image Credit : Julien De Luca

Filed Under: Guides, Top News







65 ExaFLOP AI Supercomputer being built by AWS and NVIDIA


As the artificial intelligence explosion continues, the demand for more advanced AI infrastructure grows with it. In response, Amazon Web Services (AWS) and NVIDIA have expanded their strategic collaboration to provide enhanced AI infrastructure and services, building a powerful new AI supercomputer capable of 65 exaFLOPS of processing power.

This partnership aims to integrate the latest technologies from both companies to drive AI innovation to new heights. One of the key aspects of this collaboration is AWS becoming the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips. These superchips come equipped with multi-node NVLink technology, a significant step forward in AI computing. The GH200 Grace Hopper Superchips provide up to 20 TB of shared memory, a feature that can power terabyte-scale workloads, a capability that was previously unattainable in the cloud.

New AI Supercomputer under construction

In addition to hardware advancements, the partnership extends to cloud services. NVIDIA and AWS are set to host NVIDIA DGX Cloud, NVIDIA’s AI-training-as-a-service platform, on AWS. This service will feature the GH200 NVL32, providing developers with the largest shared memory in a single instance. This collaboration will allow developers to access multi-node supercomputing for training complex AI models swiftly, thereby streamlining the AI development process.

65 exaFLOPS of processing power

The partnership between AWS and NVIDIA also extends to the ambitious Project Ceiba. This project aims to design the world’s fastest GPU-powered AI supercomputer. AWS will host this supercomputer, which will primarily serve NVIDIA’s research and development team. The integration of the Project Ceiba supercomputer with AWS services will provide NVIDIA with a comprehensive set of AWS capabilities for research and development, potentially leading to significant advancements in AI technology.

Summary of collaboration

  • AWS will be the first cloud provider to bring NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology to the cloud. The NVIDIA GH200 NVL32 multi-node platform connects 32 Grace Hopper Superchips with NVIDIA NVLink and NVSwitch technologies into one instance. The platform will be available on Amazon Elastic Compute Cloud (Amazon EC2) instances connected with Amazon’s powerful networking (EFA), supported by advanced virtualization (AWS Nitro System), and hyper-scale clustering (Amazon EC2 UltraClusters), enabling joint customers to scale to thousands of GH200 Superchips.
  • NVIDIA and AWS will collaborate to host NVIDIA DGX Cloud—NVIDIA’s AI-training-as-a-service—on AWS. It will be the first DGX Cloud featuring GH200 NVL32, providing developers the largest shared memory in a single instance. DGX Cloud on AWS will accelerate training of cutting-edge generative AI and large language models that can reach beyond 1 trillion parameters.
  • NVIDIA and AWS are partnering on Project Ceiba to design the world’s fastest GPU-powered AI supercomputer—an at-scale system with GH200 NVL32 and Amazon EFA interconnect hosted by AWS for NVIDIA’s own research and development team. This first-of-its-kind supercomputer—featuring 16,384 NVIDIA GH200 Superchips and capable of processing 65 exaflops of AI—will be used by NVIDIA to propel its next wave of generative AI innovation.
  • AWS will introduce three additional new Amazon EC2 instances: P5e instances, powered by NVIDIA H200 Tensor Core GPUs, for large-scale and cutting-edge generative AI and HPC workloads, and G6 and G6e instances, powered by NVIDIA L4 GPUs and NVIDIA L40S GPUs, respectively, for a wide set of applications such as AI fine-tuning, inference, graphics and video workloads. G6e instances are particularly suitable for developing 3D workflows, digital twins and other applications using NVIDIA Omniverse, a platform for connecting and building generative AI-enabled 3D applications.
  • “AWS and NVIDIA have collaborated for more than 13 years, beginning with the world’s first GPU cloud instance. Today, we offer the widest range of NVIDIA GPU solutions for workloads including graphics, gaming, high performance computing, machine learning, and now, generative AI,” said Adam Selipsky, CEO at AWS. “We continue to innovate with NVIDIA to make AWS the best place to run GPUs, combining next-gen NVIDIA Grace Hopper Superchips with AWS’s EFA powerful networking, EC2 UltraClusters’ hyper-scale clustering, and Nitro’s advanced virtualization capabilities.”

Amazon NVIDIA partner

To further bolster its AI offerings, AWS is set to introduce three new Amazon EC2 instances powered by NVIDIA GPUs. These include the P5e instances, powered by NVIDIA H200 Tensor Core GPUs, and the G6 and G6e instances, powered by NVIDIA L4 GPUs and NVIDIA L40S GPUs, respectively. These new instances will enable customers to build, train, and deploy their cutting-edge models on AWS, thereby expanding the possibilities for AI development.
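For developers, those instances will be requested through the usual EC2 APIs once they become available. As a minimal sketch, here is how one of the new GPU instances might be launched with boto3; the AMI ID and the exact instance size are assumptions for illustration, so check the EC2 console for the real names, regions and quotas:

```python
# Minimal sketch: request one of the new NVIDIA-backed EC2 instances
# with boto3 (pip install boto3). IDs and sizes are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Deep Learning AMI
    InstanceType="g6e.xlarge",        # assumed size of an L40S-backed G6e instance
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```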

AWS NVIDIA DGX Cloud hosting

Furthermore, AWS will host the NVIDIA DGX Cloud powered by the GH200 NVL32 NVLink infrastructure. This service will provide enterprises with fast access to multi-node supercomputing capabilities, enabling them to train complex AI models efficiently.

To boost generative AI development, NVIDIA has announced new software available on AWS, including the NVIDIA NeMo Retriever microservice and NVIDIA BioNeMo. These tools will provide developers with the resources they need to explore new frontiers in AI development.

The expanded collaboration between AWS and NVIDIA represents a significant step forward in AI innovation. By integrating their respective technologies, these companies are set to provide advanced infrastructure, software, and services for generative AI innovations. The partnership will not only enhance the capabilities of AI developers but also pave the way for new advancements in AI technology. As the collaboration continues to evolve, the possibilities for AI development could reach unprecedented levels.

Filed Under: Technology News, Top News







Polestar 4 to be built in South Korea

Polestar 4

Polestar has announced that it is expanding its production to South Korea with the new Polestar 4. The car will be built in Busan, South Korea, with production expected to start in the second half of 2025.

Located with direct access to exporting ports, the Busan plant has 23 years of experience in vehicle manufacturing and approximately 2,000 employees. The Busan plant aims to reduce its CO2 emissions by 50% by 2030, and to become carbon neutral by 2040, through a combination of energy efficiency improvements and the adoption of renewable energy sources.

Polestar’s asset-light approach to development and manufacturing enables it to benefit from the competence, flexibility and scalability of its partners and major shareholders, without needing to invest in its own facilities.

Thomas Ingenlath, Polestar CEO, says: “We’re very happy to take the next step in diversifying our manufacturing footprint together with Geely Holding and Renault Korea Motors, a company that shares our focus on quality and sustainability. With Polestar 3 on-track to start production in Chengdu, China in early 2024 and in South Carolina, USA, in the summer of 2024, we will soon have manufacturing operations in five factories, across three countries, supporting our global growth ambitions.”

You can find out more details about the new Polestar 4 electric vehicle over at the Polestar website at the link below, the company has a wide range of new electric vehicles launching over the next few years.

Source: Polestar

Filed Under: Gadgets News







Volvo EX30 EV SUV to be built in Belgium

Volvo EX30

Volvo recently unveiled their new Volvo EX30 EV SUV, and the carmaker has revealed that this new EV will also be built in Belgium from 2025, expanding production of the car into Europe.

Production of this new EV has already started in Zhangjiakou, China, and the first cars will be delivered to customers before the end of 2023; Volvo is expecting this to be a popular model.

The EX30, staying true to Volvo’s legacy, doesn’t compromise on safety. Designed to navigate the bustling urban jungle, it ensures the well-being of its occupants and those around. A standout feature is its proactive approach towards safeguarding cyclists and pedestrians.

The innovative system diligently monitors for potential ‘dooring’ incidents, alerting you if you’re on the verge of opening your door in the path of an oncoming cyclist, scooterist, or jogger. This, coupled with cutting-edge protective safety technology, reinforces the EX30’s commitment to uphold Volvo’s gold standard in safety.

Furthermore, the myriad of configurations available caters to the diverse tastes and requirements of potential owners. Whether you prioritize luxury, performance, or eco-friendliness, the EX30 has an iteration that resonates. Essentially, it stands as a testament to Volvo’s commitment to cater to individuals from all walks of life, each with their distinct preferences and lifestyles.

You can find out more details about the new Volvo EX30 EV SUV over at the Volvo website at the link below. Volvo is also building its XC40 and XC60 SUV models in Europe and China.

Source: Volvo

Filed Under: Auto News







How Perplexity AI was built in just six months


At the Ray Summit 2023, a gathering of software engineers, machine learning practitioners, data scientists, developers, MLOps professionals, and architects, Aravind Srinivas, the founder and CEO of Perplexity AI, shared the journey of building a first-of-its-kind LLM-powered answer engine in just six months with less than $4 million. The summit, known for its focus on building and deploying large-scale applications, especially in AI and machine learning, provided the perfect platform for Srinivas to delve into the engineering challenges, resource constraints, and future opportunities of Perplexity AI.

Perplexity AI, a revolutionary research assistant, has carved a niche for itself by providing accurate and useful answers backed by facts and references. It has a conversational interface, contextual awareness, and personalization capabilities, making it a unique tool for online information search. The goal of Perplexity AI is to make the search experience feel like having a knowledgeable assistant who understands your interests and preferences and can explain things in a way that resonates with you.

How Perplexity AI was developed

The workflow of Perplexity AI allows users to ask questions in natural, everyday language, and the AI strives to understand the intent behind the query. It may engage in a back-and-forth conversation to clarify the user’s needs. The advanced answer engine processes the questions and tasks, taking into account the entire conversation history for context. It then uses predictive text capabilities to generate useful responses, choosing the best one from multiple sources, and summarizes the results in a concise way.


Perplexity AI is not just a search engine that provides direct answers to user queries; it is much more than that. Initially, the company focused on text-to-SQL and enterprise search, with backing from prominent investors such as Elon Musk, Nat Friedman, and Jeff Dean. In November, it launched a web search for friends and Discord bots, followed by the launch of Perplexity itself a week later.

Since then, the company has been relentlessly working on improving its search capabilities, including the ability to answer complex queries that traditional search engines like Google cannot. It has also launched a ‘research assistant’ feature that can answer questions based on uploaded files and documents. To enhance user experience, Perplexity has introduced ‘collections’, a feature that allows users to save and organize their searches.

In terms of technology, Perplexity has started serving its own models, including LLMs, and has launched a fine-tuned model that combines the speed of GPT-3.5 with the capabilities of GPT-4. It is also exploring the use of open-source models and has its own custom inference stack to improve search speed.

Earlier this month Perplexity announced pplx-api, designed to be one of the fastest ways to access the Mistral 7B, Llama2 13B, Code Llama 34B, Llama2 70B and replit-code-v1.5-3b models. pplx-api makes it easy for developers to integrate cutting-edge open-source LLMs into their projects; a minimal example call is sketched after the list below.

  • Ease of use: developers can use state-of-the-art open-source models off-the-shelf and get started within minutes with a familiar REST API.

  • Blazing fast inference: our thoughtfully designed inference system is efficient and achieves up to 2.9x lower latency than Replicate and 3.1x lower latency than Anyscale.

  • Battle tested infrastructure: pplx-api is proven to be reliable, serving production-level traffic in both its Perplexity answer engine and the Labs playground.

  • One-stop shop for open-source LLMs: the team at Perplexity says it is dedicated to adding new open-source models as they arrive. For example, the team added Llama and Mistral models within a few hours of launch without pre-release access.
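Since pplx-api exposes a familiar, OpenAI-style REST interface, a first call can be sketched in a few lines of Python. The endpoint path and the model identifier used here are assumptions based on Perplexity’s public documentation at the time, so verify the current names before relying on them:

```python
# Minimal sketch of a pplx-api chat completion over its REST interface.
# pip install requests; set PPLX_API_KEY in the environment first.
import os
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",  # assumed endpoint path
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={
        "model": "mistral-7b-instruct",  # assumed model identifier
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```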

Looking ahead, Perplexity’s future plans include further improvements to its search capabilities and the development of its own models to maintain control over pricing and customization. The journey of Perplexity AI, as shared by Aravind Srinivas at the Ray Summit 2023, is a testament to the power of innovation under tight resource constraints.

Filed Under: Technology News, Top News




