
OPINION

When it comes to language tools, we can’t let Apple rewrite the social contract

Joe Sexton

29 Jan 2025


In what’s becoming a well-worn Apple ad, a lethargic, easily distractible bloke pauses twirling headphones in his office chair to use ‘Apple Intelligence’ to jazz up a casual draft email. The recipient, oblivious to the advantage ‘Warren’ has taken, is astonished by his colleague’s new writing style. As viewers, shouldn’t our own astonishment be directed at the brazen marketing route Apple is taking with this one?


It doesn’t seem too great a stretch to conclude that the world’s biggest tech company is here thumbing its nose at the authenticity of communication. In another video ad last year, Apple failed to read the room by promoting the iPad in mercilessly reductive fashion, crushing creative tools into the device. The outcry prompted an apology. And yet the ‘Warren’ ad may be a stronger example of Apple putting a rosy filter on what should popularly be denounced as a danger.


There is, however, simply no point nursing a gripe about language software itself. Its benefits are legion – instructions, itineraries, study notes, to name a few. It spares people from pouring time into writing tasks that aren’t particularly meaningful or uplifting, and it opens access to language realms that might otherwise seem deliberately confusing. Responsible applications of AI language software should therefore be celebrated. But whilst significant appetite clearly exists for text-sprucing, tone-adjusting software such as ‘Apple Intelligence’, can we really allow duplicitous use of it to be mainstreamed?


What surprises about the ‘Warren’ ad is not the phone technology itself, but Apple’s tacit approval of a guy using it sneakily. So often, tech companies lob their products into the marketplace and leave consumers to navigate any ethical concerns for themselves. An example is the recent collaboration between Meta and Ray-Ban: glasses that can conveniently, surreptitiously film their wearer’s surroundings. Online advertisements featuring musical artists James Blake and Anderson .Paak make no effort to stress the glasses’ proper terms of use. It’s as though, for such a product, the inappropriate applications are so obvious they don’t need discussing. In reality, it’s precisely because the inappropriate uses are so obvious (they’re secret cameras, for God’s sake!) that the makers should display more caution.


But in Apple’s ‘Warren’ ad, it appears the company isn’t so much ignoring the dubious applications of ‘Apple Intelligence’ as delighting in them. ‘Warren’, the character, might only intend to save time, but in an office that’s apparently slow on the language-software uptake, he enhances his own reputation in turn. In his fictional universe, ‘Warren’ probably wouldn’t be punished if he were caught. But in the real world, the great harm lies in the example he sets for the masses. For any student weighing whether to use ‘Apple Intelligence’ or ChatGPT to cheat on assessed coursework, the ‘Warren’ ad must resonate as a vote of encouragement.


‘Apple Intelligence’, on its own promotional page, is trumpeted as ‘AI for the rest of us’. Ignoring the tired ‘common man’ appeal, Apple’s express wish is seemingly to be among the companies parachuting AI into the everyday. Perhaps it envisions driving us all into an age well ahead of Warren’s workplace, where the use of AI writing tools will be so widespread as to be presumed, removing the relevance of ‘acknowledgement’ altogether. That, however, would require tweaking the terms of the present social contract: by current standards, misrepresenting oneself via such technology remains a wrong. With this in mind, the ‘Warren’ ad looks less like a thoughtless marketing effort than a deliberate attempt to shift the dial part of the way. Via ‘Warren’, Apple probably intends to provoke lounge-room criticisms of the kind made in this article, only so these can be hosed down in turn.


So let’s try to find some points especially difficult to douse. Yes, we might come to accept uncredited AI in stereotypically mindless workplaces like Warren’s, but consider how we’ll fare if it’s commonplace in loftier or more dynamic professional scenarios – supplying the basis of a school principal’s address, for instance, or masking the sloppy reasoning of a doctor. Worse, consider the secret application of writing tools in social or romantic interactions. Imagine being wooed into friendships or agreeing to dates on the basis of phrases polished without the real attention of the sender. Sure, ‘real life’ might be the testing ground where such fraudulence comes to be revealed, but wouldn’t it just as often escape detection? And if we couldn’t be absolutely sure of authenticity, would we build into our ‘critical thinking’ a degree of skepticism towards all forms of written communication? Some of the popular distrust of politicians is surely explained by perceptions that their words are frequently not their own, shaped instead by staffers or wiser linguistic heads. In a similar vein, in a world that mashes together the ‘self-made’ and the ‘artificial’, mightn’t many of us begin to show apathy towards all dialogue that isn’t face-to-face and spontaneous?


Of course, these aren’t quandaries for the future. The language software that exists at present is perfectly capable of fostering such an atmosphere of uncertainty. What protects us for now are only those social norms around transparency, which Apple and its ilk seemingly mean to shake up. Perhaps, from their vantage at the prow of ‘human progress’, these companies genuinely believe such a world to be unavoidable. But would that view really entitle advertisements for ‘Apple Intelligence’ not only to avoid disclaimers about honesty or restraint, but to promote the opposite?


Unsurprisingly, there’s also no mention of how written expression cuts close to the bone of what we often conceive of as ‘identity’. After earlier leaps in technological efficiency, it would have been silly to tirelessly distinguish ‘manmade’ from ‘made with help’. It was never important to admit that a car had predominantly been assembled by robotic arms rather than by manual labour, for instance, or that a drug had been mechanically manufactured rather than painstakingly prepared by a chemist. In such cases, non-human labour is actually more reliable. But when technological progress comes to implicate communication itself, and the substance within it, we’ve arrived on dicier ground. There are simply no examples of over-hyped fears from the past that tech companies can use to placate alarmists. Nothing that’s come before is comparable; the ‘spellcheck’ and ‘synonyms’ functions on long-standing word processors aren’t even in the same postcode.


By employing AI writing tools, we’re not simply palming off our time (in a fashion that might very often be celebrated, we should grant), but also some of what it means to be imperfect brains with distinct impressions of the world. Sure, the words we summon aren’t everything. We’re told our actions matter more, but the distinction between ‘words’ and ‘actions’ is easily blurred, and we can’t deny that our written communication carries notes of personality and worldview. There are many laudable benefits of language software, but there must continue to be preserves where we’re expected to scratch our heads until we find the right thing to say, and where taking shortcuts without admitting it draws condemnation. We’ve lived for two years in a ChatGPT universe, yet expectations around honesty in certain contexts have hardly changed, and there’s no good reason to adjust them merely because AI writing tools are becoming more accessible. Forging new ground, companies like Apple may well feel it’s in their power to set the ethical parameters of communication into the future, but they need to hear that, in a few vital respects, they’re getting ahead of themselves.
