TikTok’s parent company ByteDance has reportedly made a quiet investment in Xinyuan Semiconductors, a Chinese memory chip manufacturer.
According to a report from Pandaily, a Beijing-based tech media site, the move positions ByteDance as the third-largest shareholder in the chipmaker, with an indirect stake of 9.5%.
A ByteDance spokesperson confirmed this previously undisclosed investment to Pandaily, stating its aim is to hasten the development of VR headsets. This move aligns with ByteDance’s growing interest in the VR sector, as it plans to take on Meta’s Quest and Apple’s Vision Pro.
Pushing ahead into VR
Based in Shanghai and established in 2019, Xinyuan Semiconductors specializes in Resistive Random Access Memory (ReRAM) technology and related chip products. The company’s portfolio covers three major application areas: high-performance industrial control and automotive SoC and ASIC chips, Computing in Memory (CIM) IP and chips, and System-on-Memory (SoM) chips.
This investment in Xinyuan Semiconductors isn’t ByteDance’s first venture into the semiconductor industry. In 2021, the tech behemoth also invested in Moore Threads, a Chinese GPU manufacturer.
The company’s strategic investments signal a clear intent to compete in the VR space. TikTok is already available as a native app for Vision Pro.
But while this latest investment could potentially be setting the stage for a showdown with Apple and its Vision Pro headset, ByteDance has another far bigger battle on its hands right now.
The US House of Representatives recently passed a significant bill that could lead to a TikTok ban in America if the Chinese parent company fails to sell its controlling stake in the social media app within the next six months.
Samsung kicked off the month of March with new challenges for Samsung Health and PENUP users. As usual, the new Samsung Health challenge runs for the whole month, while PENUP artists can participate in a two-week contest before the theme changes in mid-March.
Here are the current live challenges. For Samsung Health, the March theme is “Jungle.” Samsung asks participants if they’ve become too dependent on high-tech devices and invites them to “Imagine getting lost in a jungle where you’re totally unplugged from technology. Things may be uncomfortable, but think of it as an adventure.”
As for the PENUP challenge, it began on March 1 and will end on March 15. The theme is “Let’s Draw School,” and as usual, Samsung wrote a fairly lengthy introduction to the challenge to set the mood and inspire digital painters.
There’s always room for interpretation with these PENUP challenges, but Samsung encourages PENUP users to “Take on this month’s challenge by drawing schools you attended in the past, the memories you have from school, or schools from our future.”
If you want to participate in these March challenges, open Samsung Health on your Galaxy phone, access the “Together” tab, and accept the Jungle challenge.
As for PENUP, you can access the “Challenges” tab from the app’s main screen, tap the “Let’s Draw School” banner at the top, and tap the cup-shaped button in the lower right corner. Before you start drawing, as usual, you can also view submissions from other PENUP users and read the challenge’s description by tapping “Introduction.”
At Ray Summit 2023, a gathering of software engineers, machine learning practitioners, data scientists, developers, MLOps professionals, and architects, Aravind Srinivas, the founder and CEO of Perplexity AI, shared the journey of building a first-of-its-kind LLM-powered answer engine in just six months with less than $4 million. The summit, known for its focus on building and deploying large-scale applications, especially in AI and machine learning, provided the perfect platform for Srinivas to delve into the engineering challenges, resource constraints, and future opportunities of Perplexity AI.
Perplexity AI, a revolutionary research assistant, has carved a niche for itself by providing accurate and useful answers backed by facts and references. It has a conversational interface, contextual awareness, and personalization capabilities, making it a unique tool for online information search. The goal of Perplexity AI is to make the search experience feel like having a knowledgeable assistant who understands your interests and preferences and can explain things in a way that resonates with you.
How Perplexity AI was developed
The workflow of Perplexity AI allows users to ask questions in natural, everyday language, and the AI strives to understand the intent behind the query. It may engage in a back-and-forth conversation to clarify the user’s needs. The advanced answer engine processes the questions and tasks, taking into account the entire conversation history for context. It then uses predictive text capabilities to generate useful responses, choosing the best one from multiple sources, and summarizes the results in a concise way.
Perplexity AI is not just a search engine that provides direct answers to user queries; it is much more than that. Initially, the company focused on text-to-SQL and enterprise search, with backing from prominent investors such as Elon Musk, Nat Friedman, and Jeff Dean. In November, it launched web-search bots for friends and for Discord, followed a week later by the launch of Perplexity itself.
Since then, the company has been relentlessly working on improving its search capabilities, including the ability to answer complex queries that traditional search engines like Google cannot. It has also launched a ‘research assistant’ feature that can answer questions based on uploaded files and documents. To enhance user experience, Perplexity has introduced ‘collections’, a feature that allows users to save and organize their searches.
In terms of technology, Perplexity has started serving its own models, including LLMs, and has launched a fine-tuned model that combines the speed of GPT-3.5 with the capabilities of GPT-4. It is also exploring the use of open-source models and has its own custom inference stack to improve search speed.
Earlier this month Perplexity announced pplx-api, designed to be one of the fastest ways to access Mistral 7B, Llama2 13B, Code Llama 34B, Llama2 70B, replit-code-v1.5-3b models. pplx-api makes it easy for developers to integrate cutting-edge open-source LLMs into their projects.
Ease of use: developers can use state-of-the-art open-source models off the shelf and get started within minutes with a familiar REST API.
Fast inference: Perplexity says its carefully designed inference system achieves up to 2.9x lower latency than Replicate and 3.1x lower latency than Anyscale.
Battle-tested infrastructure: pplx-api already serves production-level traffic in both the Perplexity answer engine and the Labs playground.
One-stop shop for open-source LLMs: the team at Perplexity says it is dedicated to adding new open-source models as they arrive; for example, it added Llama and Mistral models within a few hours of their release, without pre-release access.
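Since pplx-api is described as a familiar REST API, calling it from Python might look something like the sketch below. The endpoint URL, model name, and OpenAI-style request schema are assumptions based on common chat-completion APIs, not details confirmed by this article; check Perplexity’s own documentation before relying on them.

```python
import json
import os
import urllib.request

# Assumed endpoint; verify against Perplexity's pplx-api docs.
API_URL = "https://api.perplexity.ai/chat/completions"


def build_payload(model: str, prompt: str) -> dict:
    """Build a chat-completion request body (OpenAI-style schema assumed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str, api_key: str) -> str:
    """POST the prompt to pplx-api and return the first completion's text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Only hit the network when an API key is configured.
    key = os.environ.get("PPLX_API_KEY")
    if key:
        print(ask("mistral-7b-instruct", "What is an answer engine?", key))
```

The model identifier `mistral-7b-instruct` is illustrative; the article lists Mistral 7B, Llama 2 13B/70B, Code Llama 34B, and replit-code-v1.5-3b as models the API serves.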
Looking ahead, Perplexity’s future plans include further improvements to its search capabilities and the development of its own models to maintain control over pricing and customization. The journey of Perplexity AI, as shared by Aravind Srinivas at Ray Summit 2023, is a testament to the power of innovation under tight resource constraints.