Apple acquired Canada-based company DarwinAI earlier this year to build out its AI team, reports Bloomberg. DarwinAI created AI technology for inspecting components during the manufacturing process, and it also had a focus on making smaller and more efficient AI systems.
DarwinAI’s website and social media accounts have been taken offline following Apple’s purchase. Dozens of former DarwinAI employees have now joined Apple’s artificial intelligence division. AI researcher Alexander Wong, who helped build DarwinAI, is now a director in Apple’s AI group.
Apple confirmed the acquisition with the boilerplate statement it typically gives when questioned about purchases: Apple “buys smaller technology companies from time to time” but does not discuss its purpose or plans.
In an effort to catch up with Microsoft, Google, and others in the AI market, Apple is working hard to build artificial intelligence features for its next-generation iOS 18 and macOS 15 operating systems.
If Apple wants to be able to rival Microsoft’s Bing, OpenAI’s ChatGPT, and other generative AI offerings, it will need to integrate generative AI into a range of products. Apple is testing large language models, and AI features are said to be coming to Siri, Shortcuts, Messages, Apple Music, and more.
Apple is aiming to have AI features run on-device for privacy reasons, and DarwinAI’s efforts to make smaller AI systems could be of use to further that endeavor.
Apple CEO Tim Cook has promised that Apple will “break new ground” in generative AI in 2024. “We believe it will unlock transformative opportunities for our users,” said Cook.
Midjourney, the generative AI platform that you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.
It’s a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.
Until recently, Midjourney, which is built on a diffusion model (noise is added to an original image and the model learns to remove it, thereby learning the image's structure), could create beautiful and astonishingly realistic images from prompts you entered in the Discord channel (“/imagine: [prompt]”). But unless you asked it to alter one of its own generated images, every image set and character would look different.
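The forward half of that process is easy to sketch: keep blending the image with Gaussian noise until essentially none of the original signal survives, and training then teaches the model to undo one step at a time. Here is a minimal Python illustration of the math, not Midjourney’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an image: an 8x8 grid of pixel values, centered at zero.
x = rng.uniform(0.0, 1.0, size=(8, 8))
x = x - x.mean()

beta = 0.05            # fraction of noise mixed in at each step
signal_scale = 1.0     # how much of the original image remains

for t in range(200):
    noise = rng.normal(0.0, 1.0, size=x.shape)
    # One forward-diffusion step: shrink the signal, add fresh noise.
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    signal_scale *= np.sqrt(1.0 - beta)

# After 200 steps, less than 1% of the original image survives;
# a diffusion model is trained to run these steps in reverse.
print(round(signal_scale, 4))
```

Generation then starts from pure noise and applies the learned reverse steps, which is why a fresh prompt normally produces a fresh-looking character every time.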
Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.
I guess I don’t know how to describe myself. (Image credit: Future)
Things are getting weird. (Image credit: Future)
In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney’s four generated image options, upscaled it for more definition, and then, using the new “--cref” parameter and the URL for my generated image (with the character I liked), I got Midjourney to generate new images featuring that same AI character.
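For readers who want to try it, the general shape of the prompt is below. The URL here is a placeholder, and Midjourney also reportedly accepts an optional “--cw” (character weight) value from 0 to 100 that controls how closely the new image sticks to the reference:

```
/imagine prompt: a man walking through a busy train station --cref https://example.com/my-character.png --cw 80
```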
Later, I described a character with Charles Schulz’s Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn’t or wouldn’t put the kite in the tree branches).
An homage to Charles Schulz (Image credit: Future)
It’s far from perfect. Midjourney still tends to over-adjust the art but I contend the characters in the new images are the same ones I created in my initial images. The more descriptive you make your initial character-creation prompts, the better result you’ll get in subsequent images.
Perhaps the most startling thing about Midjourney’s update is the utter simplicity of the creative process. Writing natural language prompts has always been easy, but training a system to make your character do something might typically take some programming or even AI model expertise. Here it’s just a simple prompt, a single parameter, and an image reference.
Got a lot closer with my photo as a reference (Image credit: Future)
While it’s easier to take one of Midjourney’s own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same “--cref” parameter. I found an online photo of myself and entered this prompt: “/imagine: making a pizza --cref [link to a photo of me]”.
Midjourney quickly spit out an interpretation of me making a pizza. At best, it’s the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.
Oh, hey, Not Tim Cook (Image credit: Future)
Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he’s holding an iPad that looks like it has pizza for a screen.
When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.
Midjourney me enjoying pizza with my doppelgänger CEO (Image credit: Future)
Midjourney’s AI will improve and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.
Such a tool could speed storyboarding but also make character animators very nervous.
If it’s any consolation, I’m not sure Midjourney understands the difference between me and a pizza and pizza and an iPad – at least not yet.
New generative AI features are expected to be a highlight of iOS 18, and a skilled artist created a concept video that presents an early look at how they might work.
The video also shows other suggested upgrades, like bringing Split View — Apple’s multitasking feature that lets two apps appear side by side on iPad (and Mac) — to the iPhone.
iOS 18 concept video shows the potential of generative AI
We don’t need to depend on leaks or rumors to learn that Apple has big plans for artificial intelligence — the head of the company flat-out said so. “I think there’s a huge opportunity for Apple with GenAI and AI,” CEO Tim Cook said in February. “We’re excited to share the details of our ongoing work in that space later this year.”
On the assumption that the iPhone will be the main beneficiary of these new capabilities, Kevin Kall, aka the Hacker 34, dreamed up multiple ways to integrate AI into the operating system. Then Kall created an iOS 18 concept video to show them off.
“Engage with Siri like never before. Now integrated into every app, Siri becomes your on-demand assistant for generating text and images, making your interactions seamless and more intuitive,” says Kall’s description of his creation. “Introducing a novel way to edit your photos — using just your voice. Command your iPhone to adjust, filter, or crop your photos, offering a hands-free approach to perfecting your images.”
Watch the iOS 18 concept video:
Over the years, the artist built a collection of concepts depicting future Apple devices. These always stay grounded in what’s really possible and don’t include tech that won’t be available for a decade or more.
Other possible new iPhone features
Generative AI features aren’t the only suggestions in Kall’s iOS 18 concept video. Others include carrying Split View over from iPadOS to iPhone. This would allow two applications to appear side by side, an arrangement that seems practical on a huge display like in the iPhone 15 Pro Max. The iPhone 16 lineup reportedly will bring even larger screens, among other improvements.
(Incidentally, iOS concept creators have been toying with the idea of adding Split View to the iPhone for years. We’ve seen concepts pitching the feature in iOS 13, iOS 14, iOS 15 and iOS 17.)
Kall’s other suggestions include making the iPhone Control Center more editable and adding custom lock screen buttons.
We’ll discover whether any of the features from the concept video make it into iOS 18 when Apple unveils it — almost certainly at WWDC24 in June.
This is the era of generative AI, a sophisticated branch of technology that is rapidly altering the landscape of content creation. It’s a field where the lines between human ingenuity and machine efficiency are blurring, giving rise to a new era of innovation. Generative AI is distinct from the AI most people are familiar with. Instead of merely processing information, it has the remarkable ability to produce new content that was once considered the sole province of human creativity. Imagine a tool that could offer you intelligent solutions on demand, much like having a digital genius at your fingertips. This is the essence of what generative AI brings to the table.
Generative AI refers to a subset of artificial intelligence technologies that can generate new content, such as text, images, music, and even code, based on the patterns and data they have learned from. Unlike traditional AI, which focuses on understanding or interpreting existing information, generative AI takes this a step further by creating original output that can mimic human-like creativity. The foundation of generative AI involves complex algorithms and models that learn from vast amounts of data, identifying underlying patterns, structures, and relationships within this data.
Generative AI explained in simple terms
The key to unlocking the full potential of generative AI lies in prompt engineering—the art of crafting the right instructions to guide the AI towards generating the desired outcome. As AI becomes more integrated into our everyday tasks, mastering this skill is becoming increasingly important. It ensures that the AI’s output aligns with our goals and expectations.
Generative AI is a step above its predecessors in its ability to create. While traditional AI systems are adept at organizing and classifying existing data, generative AI can write essays, create music, or produce realistic images from a simple text description. This is made possible by Large Language Models (LLMs) like the Generative Pre-trained Transformer (GPT). These models are trained on vast amounts of data, enabling them to generate text that is not only coherent but also contextually relevant. They are powered by complex algorithms that allow them to improve their performance continuously.
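The core mechanic behind these models, predicting the next token from the ones that came before it, can be illustrated with something far simpler than an LLM: a bigram model that just counts which word tends to follow which. This is a toy stand-in for intuition only; GPT-style models use neural networks over much longer contexts:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:                 # dead end: no observed successor
        return None
    words = list(counts)
    return rng.choices(words, weights=[counts[w] for w in words])[0]

# "Generation": start with a word and repeatedly sample what comes next.
rng = random.Random(0)
sentence = ["the"]
for _ in range(5):
    nxt = next_word(sentence[-1], rng)
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))
```

Scale that counting table up to billions of learned parameters and thousands of tokens of context, and you get the qualitative leap to coherent, contextually relevant text that the article describes.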
The capabilities of generative AI are not limited to text. It can turn rough sketches into detailed, lifelike images, provide elaborate descriptions of visuals, convert speech to text, and even create spoken content or video clips from written descriptions. Multimodal AI products push these boundaries even further by blending different forms of media, thereby enriching the user experience and expanding the functionality of AI. Application Programming Interfaces (APIs) play a pivotal role in the integration of AI into various products. They act as the bridge that allows different software components to communicate with each other, making it possible for AI to become a seamless part of our digital tools.
Summary explanation of Generative AI
To understand generative AI, it’s crucial to grasp two key concepts: machine learning and neural networks. Machine learning is a method of teaching computers to learn from data, improve through experience, and make predictions or decisions. Neural networks, inspired by the human brain’s architecture, are a series of algorithms that recognize underlying relationships in a set of data through a process that mimics the way a human brain operates.
Generative AI operates primarily through two models: Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
Generative Adversarial Networks (GANs): GANs consist of two parts, a generator and a discriminator. The generator creates new data instances, while the discriminator evaluates them against real data. The generator’s goal is to produce data so authentic that the discriminator cannot distinguish it from real data. This process continues until the generator achieves a high level of proficiency. An example of GANs in action is the creation of realistic human faces that do not belong to any real person.
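The adversarial loop described above can be shown end to end with a deliberately tiny example: a two-parameter generator tries to match a 1-D Gaussian while a logistic discriminator tries to tell real samples from fakes. This is a toy sketch of the GAN objective with hand-derived gradients, not production code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b turns noise z into candidate samples.
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.02

for step in range(5000):
    real = rng.normal(3.0, 0.5, 64)       # real data: N(mean=3, std=0.5)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator ascent on E[log D(real)] + E[log(1 - D(fake))]
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on the non-saturating objective E[log D(fake)]
    d_fake = sigmoid(w * fake + c)
    g = (1 - d_fake) * w                  # d/d(fake) of log D(fake)
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

print(round(b, 1))  # b drifts toward the real mean of 3
```

The tug-of-war plays out exactly as the paragraph describes: the generator’s offset is pulled toward the real data’s mean because that is what fools the discriminator.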
Variational Autoencoders (VAEs): VAEs are also used to generate data. They work by compressing data (encoding) into a smaller, dense representation and then reconstructing it (decoding) back into its original form. VAEs are particularly useful in generating complex data like images and music by learning the probability distribution of the input data.
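The encode-then-decode idea can be demonstrated with plain linear algebra: compress 2-D points to a single number by projecting onto a learned direction, then map that number back to 2-D. This is an ordinary (non-variational) compression sketch, purely to illustrate the encoder and decoder roles; a real VAE learns a probabilistic version with neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 two-dimensional points lying near the line y = 2x.
t = rng.normal(size=200)
data = np.stack([t, 2.0 * t], axis=1) + 0.05 * rng.normal(size=(200, 2))

# "Encoder": project each point onto the dominant direction of the data.
mean = data.mean(axis=0)
centered = data - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
direction = vt[0]                 # unit vector along the data's main axis
codes = centered @ direction      # one number per point: the compressed code

# "Decoder": reconstruct a 2-D point from each one-number code.
recon = np.outer(codes, direction) + mean

err = float(np.mean((recon - data) ** 2))
print(round(err, 4))  # small: little is lost in the round trip
```

Swap the projection for a learned encoder network, make the codes a probability distribution, and add a decoder network, and you have the VAE recipe the paragraph outlines.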
Examples of Generative AI Applications:
Text Generation: Tools like OpenAI’s GPT (Generative Pre-trained Transformer) can produce coherent and contextually relevant text based on a given prompt. For instance, if you ask it to write a story about a lost kitten, GPT can generate a complete narrative that feels surprisingly human-like.
Image Creation: DeepArt and DALL·E are examples of AI that can generate art and images from textual descriptions. You could describe a scene, such as a sunset over a mountain range, and these tools can create a visual representation of that description.
Music Composition: AI like OpenAI’s Jukebox can generate new music in various styles by learning from a large dataset of songs. It can produce compositions in the style of specific artists or genres, even singing with generated lyrics.
Code Generation: GitHub’s Copilot uses AI to suggest code and functions to developers as they type, effectively generating coding content based on the context of the existing code and comments.
As we observe the swift progress of generative AI, it’s important to maintain a balanced perspective. We must embrace the possibilities that AI offers while acknowledging its current limitations. Human insight remains irreplaceable, providing the domain expertise and ethical guidance that AI is not equipped to handle.
Generative AI is reshaping the boundaries of what we consider achievable. It presents us with tools that enhance human productivity and creativity. By gaining an understanding of AI models, becoming proficient in prompt engineering, and preparing for the advent of more autonomous systems, we position ourselves not just as spectators but as active contributors to the unfolding future of technology.
Filed Under: Guides, Top News
Latest timeswonderful Deals
Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.
Generative artificial intelligence (AI) is rapidly transforming how we interact with technology. This area of study, which focuses on the creation of new content such as text, images, and audio from existing data, is becoming increasingly relevant in our daily lives. Technologies like ChatGPT and DALL-E 3 are prime examples of how generative AI can innovate and automate tasks, showcasing its ability to influence our digital experiences.
Generative AI has been around for a while, subtly shaping the way we use technology. Early versions of AI, like Google Translate and Siri, have set the stage for more advanced systems such as GPT-4. These technologies have evolved from simple automated responses to generating complex, human-like text and realistic images, making them more and more a part of our everyday digital interactions.
At its core, generative AI works by mimicking the human brain through language modeling and neural networks. This allows the AI to learn from vast amounts of data on the web, recognizing patterns and associations that enable it to produce content that is both relevant and engaging. However, creating a generative AI model is just the first step. Fine-tuning these models is crucial to ensure that they can perform specific tasks accurately and reliably.
What is generative AI?
One of the most remarkable aspects of generative AI is its ability to improve itself through self-supervised learning. This means that the AI can analyze additional data, identify its own errors, and correct them without human intervention, much like how we learn from our own experiences.
As AI models become larger and more complex, they can produce outputs that are increasingly nuanced and sophisticated. But scaling up these models comes with its own set of challenges, such as managing the computational demands and the potential for errors that can arise.
Generative AI is not without its flaws. Issues such as bias, misinformation, and the generation of irrelevant or nonsensical content—sometimes referred to as “hallucinations”—can lead to distorted outputs that may be unreliable or even harmful. Addressing these challenges is essential for the ethical use of AI.
The impact of generative AI extends beyond the technology itself. There are environmental considerations to take into account, as well as the potential effects on job markets. As AI becomes more prevalent in society, it’s important to ensure that its development aligns with societal values and ethical practices.
Looking ahead, the future of generative AI is likely to involve more efficient system architectures and the need for careful regulation. Despite its progress, AI still faces difficulties in understanding the physical world and human emotions, which highlights the importance of ongoing research and development.
The recent Turing Institute lecture stressed the importance of human involvement in guiding the evolution of AI. As AI continues to advance, it’s crucial to ensure that it serves beneficial purposes, reduces biases, and reflects societal values.
Generative AI is a powerful tool that has the potential to reshape various industries. Understanding its capabilities, limitations, and impact on society is key to harnessing its power responsibly. As we look to the future, it’s clear that generative AI will continue to play a significant role in how we interact with technology, and it’s up to us to steer its development in a direction that benefits everyone.
Google has announced that it is bringing generative AI to Google Maps. The new feature is rolling out first in the USA and will be available in early access to select Local Guides users.
Google explains: “Let’s say you’re visiting San Francisco and want to plan a few hours of thrifting for unique vintage finds. Just ask Maps what you’re looking for, like ‘places with a vintage vibe in SF.’ Our AI models will analyze Maps’ rich information about nearby businesses and places along with photos, ratings and reviews from the Maps community to give you trustworthy suggestions.
“You’ll see results organized into helpful categories — like clothing stores, vinyl shops and flea markets — along with photo carousels and review summaries that highlight why a place might be interesting for you to visit.
“Maybe you also want to grab a bite to eat somewhere that keeps those vintage vibes going. Continue the conversation with a follow-up question like ‘How about lunch?’ Maps will suggest places that match the vintage vibe you’re looking for, like an old-school diner nearby. From there, you can save the places to a list to stay organized, share with friends or revisit in the future.”
You can find out more details about the changes and features coming to Google Maps with generative AI on Google’s website at the link below. Google is expected to expand the feature to more Maps users this year.
Source Google
The field of artificial intelligence (AI) is undergoing a significant transformation, and the Turing Institute is at the forefront of this exciting era. Named after the legendary AI pioneer Alan Turing, the institute has become a beacon of innovation, turning theoretical concepts into practical applications that are beginning to reshape our world.
Since the mid-2000s, AI has experienced a surge in growth, driven by breakthroughs in machine learning. The effectiveness of these systems is largely dependent on the quality of the training data they receive. This process, known as supervised learning, allows AI to learn from examples. One of the most critical developments in this area has been the creation of neural networks. These networks, inspired by the human brain, enable machines to process and interpret vast amounts of data.
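Supervised learning in miniature looks like this: show the system labeled examples, measure its prediction error, and nudge its parameters to reduce that error. The same loop, scaled up enormously, is what trains neural networks; this sketch fits a straight line rather than a network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs x with labels y generated by y = 3x + 1 (plus noise).
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 1.0 + 0.01 * rng.normal(size=100)

w, b = 0.0, 0.0   # parameters to learn
lr = 0.1

for _ in range(500):
    pred = w * x + b
    err = pred - y
    w -= lr * np.mean(err * x)   # gradient of mean squared error w.r.t. w
    b -= lr * np.mean(err)       # gradient of mean squared error w.r.t. b

print(round(w, 1), round(b, 1))  # recovers roughly 3.0 and 1.0
```

Replace the line with a stack of neural-network layers and the labels with millions of examples, and this is essentially the training process the paragraph describes.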
The future of generative AI
Among the most notable advancements in AI is the creation of sophisticated language models, such as GPT-3. These models have the ability to generate text that is so similar to human writing that it can be difficult to distinguish between the two. The versatility of these models is remarkable, and they are being used in a variety of applications. However, they are not without their flaws. These AI systems can sometimes produce errors, demonstrate biases, and raise concerns about issues such as toxicity and compliance with laws like the General Data Protection Regulation (GDPR).
Despite the impressive capabilities of current AI systems, they still fall short in certain areas. For instance, AI does not yet fully understand context, nor does it possess consciousness or reasoning abilities. This distinction highlights the gap between what AI can do and the full spectrum of human intelligence, which encompasses more than just language skills and pattern recognition.
The pursuit of General AI, which aims to replicate the full range of human intellectual abilities, raises profound philosophical and ethical questions. As AI-generated content becomes more prevalent online, we must consider the responsibilities associated with this content and the potential impact of AI on society, including the feedback loops it may create.
To address some of these challenges, researchers are exploring new approaches that combine symbolic AI, which operates based on a set of rules, with the data-driven methods used by large AI systems. This combination is expected to yield more robust and capable AI technologies. Additionally, the development of multimodal AI, which can process and understand various types of data such as text, images, and videos, is set to expand the possibilities of what AI can achieve.
The Turing Institute is playing a critical role in pushing the boundaries of AI while also addressing the ethical considerations that accompany these technological advances. As AI continues to progress, the goal is not to replace human capabilities but to augment them, creating tools that enhance our abilities and contribute positively to society. The future of generative AI is not only about technological innovation but also about navigating the complex landscape of societal implications that come with it.
Image Credit: Turing Institute
To maintain a fair playing field in the rapidly growing generative AI industry, the Federal Trade Commission (FTC) has initiated a probe into the business dealings of some of the sector’s most influential players. Alphabet Inc., Amazon.com Inc., Anthropic PBC, Microsoft Corp., and OpenAI Inc. are now under the microscope as the FTC exercises its authority to demand extensive documentation on their operations, affiliations, and strategies.
This FTC AI inquiry is not a hunt for legal violations but rather a deep dive into the inner workings of these companies to ensure they are not stifling competition or innovation. The FTC, under the leadership of Chair Lina M. Khan, is using its powers to issue 6(b) orders, a tool that allows for a thorough examination of business practices without the pretext of an ongoing legal case. The Commission’s unanimous decision to deploy these orders is a clear signal of its intent to keep a watchful eye on the AI market, which is evolving at a breakneck pace and has the potential to become dominated by a few key players.
The FTC is seeking information specifically related to:
Information regarding a specific investment or partnership, including agreements and the strategic rationale of an investment/partnership.
The practical implications of a specific partnership or investment, including decisions around new product releases, governance or oversight rights, and the topics of regular meetings.
Analysis of the transactions’ competitive impact, including information related to market share, competition, competitors, markets, potential for sales growth, or expansion into product or geographic markets.
Competition for AI inputs and resources, including the competitive dynamics regarding key products and services needed for generative AI.
Information provided to any other government entity, including foreign government entities, in connection with any investigation, request for information, or other inquiry related to these topics.
At the heart of the FTC’s concerns are several high-profile collaborations that could reshape the competitive landscape of the AI industry. Microsoft’s financial backing of OpenAI, Amazon’s tie-up with Anthropic, and Google’s partnership with the same AI firm are all under scrutiny. The FTC is keen to understand the strategic motives behind these alliances and their potential to lock up the market, particularly when it comes to the resources needed to develop AI technologies.
FTC AI inquiry
The companies in question have been given 45 days to respond to the FTC’s orders. The information they provide will be crucial in helping the Commission to map out the current state of play and predict how these relationships might influence the future of AI. The FTC’s investigation is not just about keeping the market competitive today; it’s also about looking ahead and ensuring that the AI industry evolves in a way that benefits society as a whole.
The implications of the FTC’s inquiry are far-reaching. It’s not just about preventing a few companies from gaining too much power; it’s about making sure that the AI industry continues to be a hotbed of innovation. By examining these strategic partnerships, the FTC is also considering how to encourage a competitive environment that fosters new ideas and benefits consumers.
The FTC AI inquiry and proactive stance in investigating the connections between generative AI firms and major cloud service providers is a significant moment in the oversight of the AI industry. The issuance of 6(b) orders to these key players is a testament to the FTC’s dedication to promoting a healthy competitive environment and fostering innovation. The outcome of this investigation will likely have a lasting impact on how investments and partnerships are formed in the AI sector. As the companies involved prepare their responses, the industry and consumers alike are watching closely, eager to see what the FTC’s findings will reveal.
Google has announced three new experimental generative AI features in Chrome (M121), aimed at enhancing the browsing experience through machine learning and AI technologies. Initially available in the U.S. to users on Mac and Windows PCs, the features include tools for organizing tabs, creating custom browser themes, and assisting with writing on the web; Google says they are not available for enterprise and educational accounts at this time. Together, these tools aim to make browsing more efficient, more personalized, and more helpful when it comes to writing.
Chrome browser AI features
For many of us, managing a multitude of open tabs can be overwhelming. Chrome’s new AI-driven Tab Organizer is here to tackle this problem. It learns from how you use the browser and then suggests how to group your tabs together. It even gives these groups helpful names and emojis. Imagine opening your browser to find all your tabs neatly categorized into groups like “Work,” “Shopping,” or “Research.” This feature is designed to turn a chaotic tab bar into a tidy, manageable workspace.
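As a very rough illustration of the grouping concept (Chrome’s actual feature reportedly uses machine learning over your usage patterns; this toy version just buckets tabs by site domain):

```python
from collections import defaultdict
from urllib.parse import urlparse

# A handful of open tabs (example URLs, for illustration only).
tabs = [
    "https://docs.google.com/document/d/1",
    "https://mail.google.com/inbox",
    "https://www.amazon.com/cart",
    "https://www.amazon.com/orders",
    "https://en.wikipedia.org/wiki/Diffusion_model",
]

# Bucket each tab under a rough group label derived from its hostname.
groups = defaultdict(list)
for url in tabs:
    host = urlparse(url).hostname or ""
    label = ".".join(host.split(".")[-2:])   # e.g. "google.com"
    groups[label].append(url)

for label in sorted(groups):
    print(label, len(groups[label]))
```

A real implementation would cluster on page titles and browsing behavior, not just hostnames, and would pick friendlier names than bare domains.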
But Chrome’s enhancements aren’t just about practicality; they’re also about making your browser feel like it’s truly yours. With the Custom Themes Creation feature, you can tell the AI what kind of visual theme you’d like, such as “tranquil beach sunset” or “sleek cyberpunk city.” The AI then uses a text-to-image model to create a theme for your browser that fits your description. It’s like being able to change your desktop wallpaper, but for your browser, allowing you to express your personal style and preferences.
Google Gemini AI
Writing on the web is another area where Chrome is set to make a big difference. The upcoming Writing Assistance feature will help you with everything from drafting a review to composing a formal email. This AI tool will offer text suggestions and help you complete sentences, making sure your writing is clear and effective. It’s like having a virtual assistant that helps you communicate your ideas more efficiently.
These new features are part of Google’s broader effort to weave AI and machine learning (ML) into the Chrome experience. The goal is to make web browsing more intuitive and tailored to each user. While Chrome already allows for some level of customization with personal photos or themes from the Chrome Web Store, these AI-driven features take customization to a whole new level. Later this year, Google plans to introduce the Gemini AI model into Chrome, which is expected to further refine the browsing experience.
The introduction of these AI features marks a significant enhancement in how users can interact with Chrome. By helping to organize tabs, customize the browser’s appearance, and assist with writing, AI is making Chrome more attuned to the individual needs of its users. As these features evolve, they hold the promise of creating a more seamless and focused online experience, enabling users to better concentrate on what matters most to them.
BMW has shown off its latest vehicle technology at the Consumer Electronics Show in Las Vegas, including generative AI, augmented reality, and teleoperated parking. The automaker also announced the integration of augmented reality glasses into the driving experience.
“BMW is synonymous with both the ultimate driving machine and the ultimate digital experience,” says Frank Weber, Member of the Board of Management responsible for BMW Group Development. “At the CES we are showing more content, more customisation and more gaming. This is all underpinned by our powerful, in-house developed BMW Operating System. And we will take a look to the future, of course, with perfectly integrated augmented reality and strong, reliable artificial intelligence at the interaction between human and machine.”
The CES show also sees the BMW Group demonstrate for the first time how augmented reality (AR) glasses are set to enrich the driving experience in future. Visitors can test the possible uses of AR glasses for themselves on a drive through Las Vegas. Wearing the glasses, they can see how navigation instructions, hazard warnings, entertainment content, information on charging stations and supporting visualisations in parking situations are embedded perfectly into the real-world environment by the “XREAL Air 2”. AR and mixed reality (MR) devices will become increasingly popular in the next few years, thanks to technological advances and entry-level models that are more affordable for customers. In future, AR and MR devices will be able to offer both drivers and passengers enhanced information and enjoyable experiences to complement the displays fitted in the vehicle.
You can find out more details about BMW’s new generative AI, augmented reality, and teleoperated parking features, along with its other new in-vehicle technology, on the BMW website at the link below.
Source BMW