Apple’s large language model (LLM) that will power iOS 18’s AI features could run entirely on-device. This would prioritize user privacy and offer faster response times.
However, running AI features on-device has a downside: they might not be as powerful as those on some Android phones.
iOS 18’s AI features might prioritize privacy and user experience
Android phones with generative AI features offload the processing and heavy lifting to the cloud. This makes those features powerful and ensures the phone’s own AI processing capabilities do not limit them.
The catch is the privacy risk involved, as your data might be uploaded to the cloud and shared. Plus, the longer response times lead to a poorer user experience.
Apple aims to bypass these problems with on-device processing: the phone would process AI commands locally instead of offloading them to the cloud. This is a more privacy-friendly approach, though Apple might have to limit its AI features based on the chip’s AI processing capabilities.
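For a rough sense of what “on-device” means in practice, here is a minimal, hypothetical Swift sketch of loading a locally bundled Core ML model and running inference without any network request. The model name, input key, and prompt are assumptions for illustration only, not Apple’s actual implementation.

```swift
import CoreML

// Hypothetical sketch: load a model that ships inside the app bundle
// and run it entirely on-device. "AssistantLM" is an assumed model name.
let configuration = MLModelConfiguration()
configuration.computeUnits = .all  // allow CPU, GPU, and the Neural Engine

do {
    // A compiled Core ML model (.mlmodelc) bundled with the app; no cloud round trip.
    guard let modelURL = Bundle.main.url(forResource: "AssistantLM", withExtension: "mlmodelc") else {
        fatalError("Bundled model not found")
    }
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)

    // Inputs and outputs stay on the device; nothing is uploaded to a server.
    let input = try MLDictionaryFeatureProvider(dictionary: ["prompt": "Summarize my unread emails"])
    let output = try model.prediction(from: input)
    print(output.featureNames)
} catch {
    print("On-device inference failed: \(error)")
}
```

The trade-off the article describes shows up directly here: the model has to fit within the phone’s memory and Neural Engine budget, which is why on-device features may be less capable than cloud-backed ones.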
Apple is behind Google and Microsoft in the generative AI features race. Running its LLM locally could give Apple’s AI features a unique advantage over its competition. Plus, the company can tout the privacy benefits of its approach.
A previous report claims that the iPhone 16 will ship with a more powerful Neural Engine to power some of iOS 18’s generative AI features. This purportedly means that some demanding AI features might not come to existing iPhones.
Apple could reportedly partner with Google and integrate Gemini to power iOS 18’s more powerful generative AI features.
Apple’s marketing strategy could help its AI features stand out
Additionally, in the latest edition of the Power On newsletter, Bloomberg’s Mark Gurman claims Apple will adopt a different marketing strategy for iOS 18’s AI features. Instead of highlighting the generative AI tools themselves, the company will show how these features benefit users in everyday use.
Apple’s marketing prowess could help its generative AI features stand out even if they are less powerful.