appleOS 26

The public betas for Apple’s various operating systems are now out, and they will gradually make their way to many of the 2.35 billion iPhones, iPads, Macs and other devices Apple reports as active, starting in the autumn.

I haven’t quite dared to try any of them in person yet, but from what I’ve observed, this year’s operating system releases look generally positive.

The new Liquid Glass UI theme is a welcome change, recalling the early days of Aqua when Apple crafted inventive, whimsical interfaces. Those were certainly fun times, but it’s important to remember it was a different era and Apple had far fewer customers. Early versions of Mac OS X also ran painfully slowly in part because of their fancy user interface. I’m hoping Liquid Glass won’t suffer the same performance issues, especially on older devices. I’m also aware of the legibility issues some beta testers are reporting, but I’m confident these will be ironed out before it rolls out in September.

The AI improvements are minor, and there seems to be less focus on Apple Intelligence as a brand. I’m not sure why AI needed its own brand name, really. It struck me that the likely reason for this separation was that it was built by a different team within Apple, rather than it making sense from a user’s perspective. Some of the new AI features do look interesting, however: being able to call Apple’s models from within Shortcuts, and third parties being able to utilise local LLMs (thus not requiring an internet connection), is huge. The problem is that the only devices to support this are relatively recent ones: 2023’s iPhone Pro line and 2025’s iPhone lineup. Even the base iPad, which you can buy brand new from Apple today, does not support local AI models. This means most apps will either have to make their AI-powered features optional, ensure there is a server-side fallback in place, or restrict their market to those on the latest devices. Since many apps are cross-platform anyway, I think most developers will go with the first or second option.
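To make the trade-off concrete, here’s an illustrative Python sketch of the fallback decision described above. The names and structure are mine, not Apple’s; it simply shows the shape of the logic an app would need if it wants AI features to work across devices with and without on-device models.

```python
from enum import Enum, auto

class Backend(Enum):
    LOCAL = auto()     # on-device model is available (recent hardware)
    SERVER = auto()    # fall back to a server-side model over the network
    DISABLED = auto()  # neither available: hide or grey out the feature

def pick_backend(has_local_model: bool, has_network: bool) -> Backend:
    """Choose how an AI feature should run on a given device.

    Prefers the on-device model (private, offline), falls back to a
    server-side model, and disables the feature when neither is usable.
    """
    if has_local_model:
        return Backend.LOCAL
    if has_network:
        return Backend.SERVER
    return Backend.DISABLED
```

An older device with a connection would quietly route to the server-side path, while a base iPad in flight mode would simply not show the feature at all.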

The iPad has received the most substantial update, with full windowing support now available. I have no complaints about this, and can’t wait to try it out. I was particularly pleased to see that it will work even on the iPad mini and the 2020 iPad Air. While many people will now finally be able to harness the device’s powerful hardware, I still think the iPad’s biggest drawback is that it can only run software from the App Store. There are so many great apps, like Visual Studio Code and Chrome, that are not there for commercial reasons or because of Apple’s policies.

The fact that windowing is not available on the iPhone is also curious. When Apple split iOS and iPadOS into separate brands a few years ago, the reaction was mostly positive; finally, the iPad was getting the attention it deserved. But thinking about it now, I have to wonder if the reason was to reduce any expectation that features added to the iPad would also appear on the iPhone. At this point, with Apple Silicon, Apple is essentially selling its customers the same computer three or four times with only minor differences. They have different-sized screens, some have a keyboard attached and others rely on a touchscreen. Some have a better camera, and built-in cellular. The core of the devices, even the operating system, is near enough identical. If you own a recent iPhone, iPad, Mac or Apple Watch, you’ve bought the same computer multiple times. The iPhone’s Apple A18 chip has a similar level of performance to the Mac’s M1 chip, which is still ridiculously fast. There is no reason why an iPhone could not become a laptop or full desktop simply by plugging it into a keyboard and monitor. In decades past, mobile phones lagged far behind desktop PCs in terms of performance, but today most people could use their phone as a desktop PC or laptop: the chip is powerful enough and there is ample memory. What’s holding this back is not the “free market” or a lack of demand, but, I suspect, Apple’s preference to continue selling us multiple devices. In this respect, the distinction between device classes is more a marketing one than a technical one.

Overall, I think the ’26 releases should be exciting, even if I wish Apple would embrace change a little more from a product-category perspective. Would the Apple of 2025 have released the iPhone in 2007, when the iPod was still king? I’m not so sure.


Another SharePoint Security Flaw

Ellen Jennings-Trace writing for Tech Radar:

New estimates regarding the recently-exploited Microsoft SharePoint vulnerabilities now evaluate that as many as 400 organizations may have been targeted.

The figure is a sharp increase from the original count of around 100, with Microsoft pointing the finger at Chinese threat actors for the hacks, namely Linen Typhoon, Violet Typhoon, and Storm-2603.

The victims are primarily US based, and amongst these are some high value targets, including the National Nuclear Security Administration - the US agency responsible for maintaining and designing nuclear weapons, Bloomberg reports.

Microsoft makes it clear this is an issue with on-prem instances of SharePoint, not the cloud based Office 365 solution.

One might question why an organisation would choose to run these services on premises in 2025. In my experience, banks and other security-focused institutions often believe their own teams can outperform Microsoft Azure, Google Cloud or AWS. Yet time and time again, we see that on-prem is actually less secure than the cloud. Unless your service is completely air-gapped from the internet, I see very few reasons to be relying on on-premises software, especially Microsoft products, in 2025.

Hopefully, running on-premises commercial services like this in the name of security will soon be consigned to the trash can of computing history, along with other security theatre measures often imposed by IT administrators, such as enforced password expiration.


Virtual Private Nonsense

Adverts for Virtual Private Networks (VPNs) are now a regular feature on many podcasts. A common theme I’ve noticed is the attempt to justify using a VPN by claiming that public WiFi networks are inherently unsafe without one. Take this recent example:

When you connect to an unencrypted network in cafes, hotels, airports, your online data is not secure. Someone on the same network can gain access to your information, passwords, bank logins, credit card information, and other things that you don't want in someone else's hands.

This is absolute nonsense. Yes, it's true that if the WiFi network doesn't require a password to connect, then your data will be sent in the clear and could theoretically be accessed by anyone, as made famous by the app Firesheep. That was in 2010, however. In the fifteen years since then, and the twelve years since the Snowden leaks, the vast, vast majority of websites have adopted their own encryption to protect data in transit. Even before then, any reputable site handling sensitive information (such as online banking or payment processing) was already using TLS (Transport Layer Security). In 2025 (and really for the past decade), you do not need a VPN on public WiFi.
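To illustrate why HTTPS already covers the eavesdropping threat, here's what Python's standard `ssl` module gives a client out of the box: certificate verification and hostname checking are on by default, which is exactly the protection the ads imply you're missing.

```python
import ssl

# Modern TLS clients verify the server's certificate chain and hostname
# by default. A passive eavesdropper on open WiFi sees only ciphertext,
# and an active attacker can't impersonate the site without a valid
# certificate for its hostname.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # reject unverifiable certs
assert ctx.check_hostname                    # cert must match the hostname

# Optionally refuse legacy protocol versions with known weaknesses.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Browsers apply the same (and stricter) defaults, which is why "someone on the same network can read your bank login" hasn't been true for any mainstream site in years.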

Regardless of whether the WiFi network is encrypted, there are inherent risks in connecting to a network you can't necessarily trust. Most devices have built-in firewalls to mitigate this. As long as that's switched on, you're probably going to be OK.

So why would anyone use a VPN? As far as I can tell, there are only three reasons:

I would not hesitate to use online banking over Starbucks WiFi. In fact, I’d be more worried about someone peering over my shoulder than any threat from the network itself.

So don’t be taken in by the scaremongering ads. The chances are, you don’t need a VPN.

But if you’ve got another reason for using one, let me know in the comments below.


Microsoft AI Gimmicks Risk AI Fatigue

Microsoft, in its ongoing quest to make Windows 11 feel increasingly clumsy and tone-deaf, has now decided to add a new “AI Tools” menu to Windows Explorer. One can almost picture the product management meeting, where some directive from on high decreed: “We must be seen to be using AI in everything, everywhere and all at once.”

While they may have a tendency to over-explain the obvious (as shown by this 1,000+ word blog post just to say they’re replacing passwords with passkeys – something that could be summarised in a single paragraph), the decision to lump a collection of seemingly unrelated features into one menu simply because they involve “AI” suggests either a complete absence of user-centric thinking or a rather desperate attempt to appear as an AI leader.

The screenshot shows four menu items. The first, “Visual Search with Bing” (one can only imagine how many bureaucrats had to approve that name), allows you to search the web using something similar to Google’s reverse image search or Apple’s Visual Lookup. The other three are image editing tools: one for adding a background blur (which uses AI to detect the main subject in an image and blur everything else), one for erasing objects (where AI estimates what would be behind the removed element and fills it in), and one for removing the background entirely (essentially the same as the blur tool, but deleting the background pixels instead of blurring them). There’s also another totally separate “Ask Copilot” menu, which presumably also uses AI, but I’m guessing this was built by another team which would explain why it’s in a totally different place in the context menu.

All of these features are genuinely useful, and it’s great to see them included in Windows (presumably using on-device processing rather than relying on the cloud). However, grouping them under an “AI Tools” menu doesn’t make much sense. “AI” in this context means machine learning-based processing and is becoming as commonplace as traditional processing tasks. Designing a user interface around the underlying technology rather than the user experience is a fundamental mistake. That's not to say a UI shouldn't ever mention "AI" – it is important to let users know when AI (especially generative AI) is going to be used or was used, as it provides a useful prompt to scrutinise the output more carefully and helps set expectations. But Microsoft’s overuse of it just makes them seem a little desperate to be seen as jumping on the AI bandwagon and risks users switching off every time they see yet another "AI" feature.


The Rise and Fall of Vector Databases

Jo Kristian Bergum on Twitter:

This surge was partly driven by a widespread misconception that embedding-based similarity search was the only viable method for retrieving context for LLMs. The resulting "vector database gold rush" saw massive investment and attention directed toward vector search infrastructure, even though traditional information retrieval techniques remained equally valuable for many RAG applications. … However, the landscape has evolved rapidly. What started as pure vector search engines now expand their capabilities to match traditional search functionality. Vector database providers have recognized that real-world applications often require more than just similarity search.

I’ve never quite understood the hype around using similarity search to retrieve content for a RAG system. Yes, cosine similarity can be useful in certain situations, but in many cases, there are better ways to find the right content. This is especially true when the user’s question has little semantic resemblance to the answer. Not to mention when you have potentially multiple versions of the same documents that may or may not be relevant depending on the context of the question. Now that every major database supports vector search, is there even a need for a dedicated product? I’d recommend giving the latest episode of the Latent Space podcast a listen, where they explore these issues, and alternatives, in more depth.
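The failure mode is easy to demonstrate with a toy example. The vectors below are made up for illustration (real embeddings have hundreds of dimensions), but they show the underlying arithmetic: cosine similarity rewards passages that *resemble the question*, so a passage that merely restates the question can outrank the one that actually contains the answer.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings": a user question and two candidates.
question = [1.0, 0.2, 0.0]
restates_the_question = [0.9, 0.3, 0.1]  # semantically close, no answer
contains_the_answer = [0.1, 0.1, 1.0]    # relevant, but phrased differently

print(cosine_similarity(question, restates_the_question))  # high (~0.99)
print(cosine_similarity(question, contains_the_answer))    # low  (~0.12)
```

This is why hybrid approaches that mix keyword retrieval (e.g. BM25) with vector search, or rerank candidates with a cross-encoder, tend to do better than pure similarity search.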


Apple’s Notification Summaries: Why Starting Small Still Matters in AI

An iOS notification summary from BBC News that reads: "Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol's office."

Image source: BBC News

Graham Fraser reporting for BBC News about Apple's new notification summarisation feature:

A major journalism body has urged Apple to scrap its new generative AI feature after it created a misleading headline about a high-profile killing in the United States.
The BBC made a complaint to the US tech giant after Apple Intelligence, which uses artificial intelligence (AI) to summarise and group together notifications, falsely created a headline about murder suspect Luigi Mangione.

The AI-powered summary falsely made it appear that BBC News had published an article claiming Mangione, the man accused of the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

I’ve worked on implementing AI into software products for the past seven years, and the first rule is always to start with a narrow domain, carefully assess how effective it is, and, if your approach is working, gradually expand your domain coverage. When it comes to notification summaries, I can’t help but wonder why Apple didn’t adopt this approach. Instead, they’ve delivered something that feels more like a hasty student project than the polished innovation we’ve come to expect.

Specifically, I would have started by limiting the product to summarising notifications where summaries are genuinely most useful. Of course, not being a product manager at Apple, I haven’t done the research, but let’s assume messaging apps would top the list. A "TL;DR" summary for lengthy WhatsApp group chats, for instance, could be genuinely valuable. On the other hand, attempting to summarise product promotions or delivery notifications from Amazon, breaking news, or even the alerts I get when my wife logs a feed on our baby-tracking app feels far less useful. Most of the complaints I’ve seen online seem to involve apps like BBC News, Amazon, or other non-communication apps. Apple would be better off avoiding attempts to summarise these types of notifications and instead focusing on apps where summarisation truly adds value: messaging apps such as WhatsApp or Slack.

That’s not to say the current implementation is flawless when it comes to messaging apps either. In an example shared by WSJ technology journalist Joanna Stern, the iPhone mistakenly assumed her wife was referring to a non-existent husband. The issue likely arose because Stern has her wife saved as “Wife” in her address book. The smaller language model onboard the iPhone, relying on statistical assumptions, concluded that someone called “Wife” must be referring to a husband when mentioning another unnamed person. It’s exactly the sort of edge case that could have been caught more easily during testing if the first version had maintained a laser-like focus on summarising messaging app content.

By starting small and focusing on where summarisation adds the most value, Apple could have delivered a more refined and impactful feature, avoiding many of the issues we've seen reported.


On MKBHD’s New App

John Gruber writes:

A swing-and-a-miss from MKBHD. Criticism of the app is on two separate levels, but they’re being conflated. Level 1: the app is not good. Level 2: a paid wallpaper app? — LOL, wallpapers are free on Reddit. That second form of criticism — that there shouldn’t even exist a paid wallpaper app — is annoying and depressing, and speaks to how many people do not view original art as being worth paying for. But it also speaks to the breadth of Brownlee’s audience, which I think is more tech-focused than design-focused.

What’s even more annoying and depressing is the inability to separate the appreciation of art from the act of commerce. Perhaps people didn’t expect someone reportedly already making millions to charge so much for so little. The app description mentions "teaming up with top-tier digital artists," so it’s not as if it’s even showcasing lesser-known artists who might benefit from the app's exposure and thus warrant some philanthropy. Gruber is wrong here. You can appreciate good design while also thinking £12/month is a rip-off.


Be the Main Character

Benj Edwards for Ars Technica:

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it’s bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It’s available on the iPhone app store, but so far, it’s picking up pointed criticism.

A social networking app where you are the centre of the universe, surrounded by bots constantly providing you with positive reinforcement and worshipping your every word. Elon Musk must be kicking himself – he could have saved $44 billion.


Troubleshooting GLIBCXX_3.4.26 Error on Raspberry Pi: .NET 9 Compatibility Issue

I was recently trying to compile a .NET project as a self-contained binary for linux-arm to run on my Raspberry Pi. When trying to run the binary, I kept getting the following error:

/usr/lib/arm-linux-gnueabihf/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by ./[executable])

It turned out the issue was caused by using .NET 9. Downgrading the project to .NET 8 resolved it. I'm unsure whether Microsoft or Raspbian needs to address compatibility with .NET 9, but as it didn't impact my code, it wasn't a problem for me.
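For anyone hitting the same error, the workaround (assuming, as in my case, your code doesn't depend on .NET 9 features) is a one-line change to the project file. This is a minimal sketch of the relevant csproj section:

```xml
<PropertyGroup>
  <!-- .NET 9's native components require a newer libstdc++ than the
       Pi's OS ships; targeting .NET 8 avoids the GLIBCXX_3.4.26
       requirement. -->
  <TargetFramework>net8.0</TargetFramework>
</PropertyGroup>
```

After changing the target framework, republish with the same self-contained settings and the binary should run without the libstdc++ complaint.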


Apple Vision Pro…blematic

Image credit: Apple (https://www.apple.com/ca/newsroom/2023/06/introducing-apple-vision-pro/)

Three and a half months after the release of Apple Vision Pro, and I am finding it difficult to get on board with Apple's vision for the future of computing. While admittedly I haven't tried the Vision Pro due to it not being available in the UK, and I'm sure I would be momentarily wowed by the experience, I just can't get excited about what I see as a deeply dystopian vision of the future.

Let’s start with the vision itself. We spend too much time on screens as it is. Endless doomscrolling and the inability to switch off are widely believed to be behind rising anxiety and depression. Apple, to their credit, has attempted to remedy this with features like Screen Time, which allow limits to be put in place. The very concept of the Vision Pro goes against this progress. It's a screen strapped to your face. Right now, the discomfort of a hot and heavy device acts as a deterrent against using it too much. My worry is that as technology progresses, much like the smartphone, the Apple Vision Pro will get lighter and more comfortable — more addictive. If the headset follows the path of the smartphone, then it will eventually be something we want to wear as much as possible.

We live in a world of filter bubbles. If you use Twitter (sorry, X), then your view of reality can be vastly different from your next-door neighbour who also uses Twitter, because you each like and follow totally different posts and sets of people. Smaller social networks don’t need an algorithm to do this, as they will typically only attract people of a certain political leaning in the first place. The end result is the same. Filter bubbles are an obvious problem for social cohesion, and while they are not limited to social networks (hello, Fox News), I can’t help thinking that when literally everything is viewed through a screen — yes, your entire reality — the problem will only get worse.

If you've seen Black Mirror, then I don’t need to explain why, but let me anyway. Imagine an app for a future revision of the Vision Pro (or its rivals) that can be worn all day. The app allows users to employ Twitter-style blocking, so the faces of people they would rather not see are blurred out. Could someone with right-wing views use an app to blur out TV screens showing CNN or the front page of the New York Times, allowing them to walk down the high street and not be bothered by reality, or what they consider dissenting opinions? In 2020, as California was struck by awful wildfires, people noticed something strange: their smartphones were automatically adjusting the apocalyptic red-tinted sky to look more “normal.” The image of humans using VR goggles to deny climate change or insert virtual green spaces into an otherwise industrial wasteland is just terrifying.

In short, I am deeply worried about a future in which our entire reality is seen through the filter of a screen. The Apple Vision Pro can’t do any of this right now, and I doubt Apple would ever allow it to. Yet as the Twitter/Musk debacle shows us, the people who run companies can change, and their policies with them. Not to mention that if the Apple Vision Pro were to be a success, there would naturally be rival products released which may not be as restrictive.

All of what I’ve said so far is admittedly far-fetched, and perhaps assumes the worst of humanity. So let me focus on the Apple Vision Pro as the product it is today. Apart from being deeply antisocial, the main problem I see is that it is an iPad strapped to your face, but priced as if it were a fully-featured MacBook strapped to your face. The iPad, and tablets in general, are great content-consumption devices, but the limitations inherent in mobile operating systems make them poor substitutes for a laptop. Samsung and Apple have taken steps to address this, including making the iPad more like a laptop with the addition of support for trackpads, mice, and keyboards. Yet the apps themselves are more often than not baby versions of the same applications available for desktop operating systems, if an app is available at all. There are exceptions — Logic and Final Cut Pro. Overall, it seems a combination of iOS (sorry, iPadOS) limitations, Apple’s greedy business practices, and the poor ergonomics of tablet devices has contributed to the iPad not taking off in a productivity context, outside some niche verticals.

By designing the Apple Vision Pro’s software stack in the image of the iPad, the device is unfortunately severely limited. Had the device been based on macOS, with the ability to run Mac software, it would be a completely different story. I understand that technical limitations likely prohibit this: the accuracy of the Vision Pro’s gesture recognition simply isn’t good enough to allow Mac apps designed for use with a mouse to be comfortably driven.

Which brings me to my final criticism. One of the most touted features of the Vision Pro is the ability to VNC into a desktop Mac and control it remotely. The fact that this was pushed as a feature is the most damning indictment of the Vision Pro’s capabilities as a computing device. A proper computing device worth its $3,500+ price tag wouldn’t need the ability to remotely connect to your other $2,000+ laptop — it would be as capable itself. On various forums I’ve seen happy Vision Pro customers tout the ability to work on a plane for hours on end with both their MacBook and Apple Vision Pro. Here we go again with the dystopian future: can someone not take a flight for a few hours without needing to be productive? For goodness' sake, read a book, listen to a podcast, or just look out of the window!

So, while the nerd in me is ready to be wowed by the Apple Vision Pro, I find myself despairing as I think about a potential future where we are always plugged in, online, and unable to escape work. I can’t help but think this isn’t what technology was supposed to be about. I have to ask myself: would the figure from Apple’s famous 1984 commercial, who throws a sledgehammer at the big screen, be wearing an Apple Vision Pro? And does the screen itself represent Apple’s latest Vision?