iOS 18 – Apple’s AI is NOT What You Think!


So you all probably know by now that iOS 18 is going to come with some game-changing AI features, but we’ve had some recent news that completely changes the story around Apple’s AI approach. So here are the latest iOS 18 leaks. Now, we’re almost at the end of March, which means spring is in full swing.

And to celebrate, we are launching our Spring Season pack for Apple Wallpapers, made by our talented designer, Hannah. The Spring Season pack consists of 10 beautiful hand-designed artworks, each showing a specific spring scene. They’re all in 8K, so they look great on everything from your phone to your tablet to your desktop. You can find the Spring Season pack, as well as all seven of our other new packs from March, in the March selection of our Apple Wallpapers app, which you can download on iOS and Android.

Okay, so just as a very quick recap: Apple aims to add some major AI features to iOS 18 to compete with companies such as OpenAI, Google, and Microsoft, who are all heavily invested in AI. Apple has been working on its own LLM, dubbed Apple GPT, as well as on a number of other AI features, including one that lets you edit images just by describing in words what you want those edits to be. Apple has reportedly been spending $1 billion per year on this effort while also planning to build hundreds of AI servers. Add to this what Tim Cook recently stated, that Apple has been pouring a tremendous amount of time and effort into AI, and it is very clear that Apple is extremely serious about it.

In fact, even the new M3 MacBook Air was advertised by Apple’s own marketing team as the world’s best consumer laptop for AI. However, Mark Gurman now reports that Apple has been in discussions with Google to integrate Google’s own Gemini into iOS 18. But hold on a second; up until now, it seemed like Apple was only going to be using its own AI rather than partnering with one of its own competitors. And although that was true, according to Mark’s report, they seem to have changed their minds. They haven’t officially started an AI partnership just yet, but they are currently in active negotiations. As some of you might know, Apple already has a licensing deal with Google, where Google pays Apple billions of dollars every single year to make Google Search the default search engine in Safari. So it does make sense for them to extend that partnership. But why? After all the news we’ve heard about Apple pursuing its own AI, why would they ditch all of those efforts and just use Google’s AI instead? Well, analyst Ming-Chi Kuo did report back in August that Apple’s own AI efforts were significantly behind its competitors’ and that a late-2024 timeframe was still unclear. So it seems like Apple was indeed aiming to ship its own AI features in 2024, but ultimately was unable to bring them to the level of OpenAI’s or Google’s. And in the end, they decided to partner with Google, which I think is quite a shame, as I was hoping to see a true ChatGPT and Gemini competitor from Apple, and it looks like we won’t be seeing one anytime soon.

And not only that, but a new research paper that came out last week showcases Apple’s new large language model, called MM1, and it does sound very promising until you realize that MM1 tops out at 30 billion parameters. GPT-4 reportedly has over a trillion, so that’s a pretty big difference. Now, it is worth mentioning that Google Gemini comes in three versions: Gemini Nano, their on-device model, which has up to 3.25 billion parameters, and then Gemini Pro and Gemini Ultra, for both of which Google has kept the parameter counts hidden. However, I can only assume that these are both more powerful than Apple’s MM1, which could explain why Apple is considering partnering with Google here.

Now, if Apple does strike a deal with Google, this won’t be a one-year partnership until Apple gets its own AI on track. This partnership will likely last for many, many years, similar to their existing deal for Google Search, which actually started back in 2007 with the launch of the original iPhone. Which made me wonder: who would be paying whom here? Apple needs Google to be able to implement AI features into iOS 18, so you would assume that Apple would be paying Google, but Google technically needs Apple too, in order to push Gemini to even more devices and compete with OpenAI. If you think about it, Google already has Gemini on its own Pixel phones. They’ve also struck a deal with Samsung to bring Gemini features to the S24 line, and Gemini will surely be coming to more Android phones by default, not to mention that Gemini is already available as an app for both Android and iOS. So if Google manages to get Gemini onto iPhones too, then they will own both the Android space and the iOS space, and therefore they will be a serious threat to OpenAI and Microsoft, which, as you all know, are also partners.
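Going back to those parameter counts for a second: here’s a rough back-of-the-envelope sketch (my own illustration, not anything from the MM1 paper) of how much memory a model of a given size needs just to hold its weights. This is the core reason on-device models like Gemini Nano stay in the low billions of parameters while server-side models can go to a trillion and beyond:

```swift
import Foundation

// Rough estimate of the memory needed just to store a model's weights,
// at different numeric precisions. Illustrative only -- real deployments
// add activation memory, KV caches, and runtime overhead on top of this.
let bytesPerParam: [String: Double] = [
    "fp16 (16-bit)": 2.0,
    "int8 (8-bit)": 1.0,
    "int4 (4-bit)": 0.5,
]

let models: [(name: String, params: Double)] = [
    ("Gemini Nano-2", 3.25e9),    // Google's stated on-device size
    ("Apple MM1", 30e9),          // largest size in the MM1 paper
    ("GPT-4 (rumored)", 1e12),    // the ~1 trillion figure cited above
]

for model in models {
    print(model.name)
    for (precision, bytes) in bytesPerParam.sorted(by: { $0.key < $1.key }) {
        let gigabytes = model.params * bytes / 1e9
        print(String(format: "  %@: %.1f GB", precision, gigabytes))
    }
}
// Even at 4-bit precision, a 30B model needs ~15 GB just for weights --
// far more RAM than any current iPhone has -- while Nano-2 fits in ~1.6 GB.
```

Anyway, that’s the scale gap Apple would be bridging by leaning on Google’s server-side models.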

So yeah, it looks like Apple plus Google versus OpenAI plus Microsoft would be the players in this AI battle. Now, for those of you who wanted to see Apple’s own AI features in iOS 18 rather than Google’s, I do still have some good news. According to Mark Gurman, only Apple’s cloud-based AI features will be outsourced; all of the on-device AI features will still be developed by Apple. So my guess is that Siri will be fully powered by Gemini, whereas the AI features inside Xcode, Apple Music, Photos, and so on will be powered by Apple’s own AI, in order to still hold true to Apple’s values of respecting user privacy.

And this brings us to a couple of other new iOS 18 leaks; they’re not necessarily AI-related, but they do give us a good idea of everything else to expect. MacRumors reports that Apple’s Freeform app will be updated with a plethora of new features and enhancements, one of which is set to be Freeform Scenes. You know how in Freeform you can have these gigantic boards with notes, sketches, photos, basically anything, and you can just keep zooming and scrolling? It’s a pretty cool concept, but you can easily get lost on your board, especially if you have a large one. Well, with this new Scenes feature, you’ll apparently be able to save specific areas of your board as scenes and then quickly jump back to them, and these scenes will be shareable via iCloud, which is great if you only want to share specific sections of your board with someone. We’ll also get some new controls at the bottom for navigating between scenes, including some new keyboard shortcuts. So yeah, pretty cool. I don’t really use Freeform; I have used it in the past, but I found its performance to be quite poor, so I do hope that gets improved in iOS 18.

Aside from this, the AirPods Pro are said to finally be getting hearing aid support in iOS 18. We’ve been hearing about this for a couple of years now, so it isn’t really clear why we haven’t gotten it yet. After all, Apple did introduce Conversation Boost, which automatically increases the volume of someone talking in front of you. And we already have that accessibility setting that lets you use the iPhone as a microphone and the AirPods as the speaker, so you can already use that as a sort of hearing aid; you just have to keep moving the iPhone near the person who’s speaking.

The reason hearing aid support kept being pushed back could have something to do with FDA clearance. Although, from the looks of it, the FDA approved a new category of over-the-counter hearing aids back in 2022 (distinct from the unregulated Personal Sound Amplification Products, or PSAPs, that were already on the market). So Apple doesn’t technically need FDA approval for this anymore, yet they still haven’t introduced hearing aid support, for whatever reason. Well, it looks like iOS 18 will finally bring it.

And aside from hearing aid support, there will apparently be a couple more accessibility updates coming in iOS 18. There will be some adaptive voice shortcuts, which will let you map a spoken phrase to an accessibility setting. So you’ll no longer have to save your most-used accessibility options to the Accessibility Shortcut menu; instead, you’ll be able to just say a custom phrase to activate them. There will also be a new category section for Live Speech, which will let you store phrases inside custom categories, apparently with custom icons too, which will be extremely useful for users with speech impairments. And we’re also getting the ability to set custom text sizes for individual apps, which is also great, as currently you can only set a custom text size for the entire OS (more on how that could work in a moment).

And lastly, it’s been reported that iOS 18 will support the exact same devices as iOS 17 did. So that’s every iPhone model since the XS and every iPhone SE since the second generation.
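On that per-app text size point, here’s a minimal sketch of how it would plug into apps in practice. It uses Dynamic Type, the long-standing UIKit mechanism apps already adopt to follow the system-wide text size; my assumption (not anything from the leak) is that a per-app override would simply feed a different content size category into this same pipeline. The view controller and label are just for illustration:

```swift
import UIKit

// Minimal Dynamic Type adoption -- the existing mechanism a per-app
// text size setting would presumably feed into. Any label configured
// this way automatically tracks the user's chosen text size.
final class GreetingViewController: UIViewController {
    private let label = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground

        // A text-style font scales with the current content size category.
        label.font = UIFont.preferredFont(forTextStyle: .body)
        // Re-scale live when the text size changes -- no restart needed.
        label.adjustsFontForContentSizeCategory = true
        label.text = "Hello, iOS 18"
        label.translatesAutoresizingMaskIntoConstraints = false

        view.addSubview(label)
        NSLayoutConstraint.activate([
            label.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            label.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ])
    }
}
```

Any app that already adopts Dynamic Type like this would pick up a per-app text size for free, which is presumably why Apple could ship the setting as a system-level feature rather than requiring app updates.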

Overall, I’m genuinely curious to see what iOS 18 ends up being. Will it be the biggest iOS change yet, as it’s been claimed to be? Or will it be similar to Samsung’s AI features in the S24 line, which seemed interesting at first but ended up being quite underwhelming? I personally didn’t find much use for those at all.
