By Lance Eliot, the AI Trends Insider
AI is starting to apologize.
That’s the latest trend for AI that directly interacts with people. The notion seems to be that if the AI has to deliver unfavorable news or appears to have made a potential mistake, it ought to be civil about the matter and emit an apology. AI developers are either opting to include the apology-generating capacity or they are being pressed by system designers and managers to infuse the “sorry about that” capability.
This might seem at first glance as a marvelous addition to an AI system and would presumably be valuable to construct. Sorry to say that the AI being apologetic has both upsides and downsides.
Let’s begin by considering a context that will help to reveal the pros and cons of AI-powered apologies. Imagine that you apply online for a car loan and the AI system determines that you are not worthy, as it were, and promptly turns you down. The belief is that this would be an ideal moment for the AI to offer you a “heartfelt” apology.
It might go something like this: Dear loan applicant, it is with great sorrow that I must inform you of the unfortunate news that your request to borrow funds to buy a car is hereby denied. Please know that you are not alone in having been spurned and accept this apology for any discomfort that might arise from this outcome. Sincerely, the AI system that reluctantly rebuffed your request.
Do you think this apology will make the person feel any better about the AI-powered decision?
Well, I doubt it would for most people, nonetheless, there is an emerging trend of having AI systems produce these kinds of messages.
We can mull over and debate the tradeoffs involved in having AI that provides these kinds of apologies. There are claimed benefits of having the AI appear to be deferential, while there are criticisms that this is a mockery of humanity and diminishes the potency of apologies all told. Before getting to the overall set of tradeoffs, imagine the myriad of areas that AI is being applied, beyond just adjudicating car loan candidates, and envision how those varied instances might opt to leverage a purposely apologizing AI system.
Here’s an intriguing aspect: Will the advent of AI-based true self-driving cars necessitate the AI being apologetic from time to time, and do we really want or expect apologies to be issued by AI driving systems?
Let’s unpack the matter and see.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/
For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/
To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/
The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/
Self-Driving Cars And Offering of Apologies
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.
Some assert that it makes sense for the AI to render an apology when the situation warrants doing so.
You might be initially baffled that there would ever be any basis for the AI driving system to issue an apology to the passengers of a self-driving car. We certainly do not expect human drivers to offer apologies, and thus there is no well-established tradition of drivers making such utterances. Sure, on a rare occasion a ride-sharing driver might apologize that their car reeks of tuna salad because they just managed to wolf down their lunch on the way to pick you up, but otherwise the number of apologies we receive while riding in cars has got to be pretty low, nearly zero or so it seems.
Well, take a step back and ponder the following scenario.
You are nestled in a self-driving car and enjoying some quiet time on the way to work. The seats can fully recline since there aren’t any driving controls inside the vehicle (the steering wheel and pedals for true self-driving cars are expected to be excised from the interior and no longer accessible by humans since the AI is doing all the driving).
The AI is dutifully watching the roadway and attempting to provide you with a safe journey from your home to the office. Thankfully, the AI is coping with the darned traffic on the freeways and the equally exasperating jammed traffic on the busy city streets. You can daydream and meanwhile, the AI is observing the surroundings like an alert hawk.
All of a sudden, the AI detects via its sensors, such as by using cameras, radar, LIDAR, and so on, that a pedestrian is weaving erratically on the sidewalk and appears to be nearing the curb of the street. It is hard to discern whether the person is going to actually come into the street. Maybe they will turn back toward the buildings at the last moment and avoid coming into the roadway. Or, in a worst-case possibility, the person might opt to pop into the street, right in front of where the self-driving car is headed.
What to do? What would you do? What should the AI do?
For human drivers, we confront these kinds of driving situations all the time. You have to make split-second judgment choices about whether a pedestrian is going to foolishly wander directly into the path of your moving car, and thus calculate in your mind whether to slow down, speed up, swerve, or take no action. Any of those options might be right. Of course, any of those options might be wrong too.
In the case of AI driving systems, the approach to date has involved setting up the AI to be extraordinarily cautious, more so than perhaps most human drivers might be. If there is even an inkling of an upcoming car crash or similar incident, the AI is going to try to avert the chances by taking the “safest” course of action (more on this in a moment).
Assume that the AI hits the brakes of the self-driving car, doing so to be at a complete stop before reaching where the pedestrian might end up being. By projecting the points at which the self-driving car and the pedestrian might intersect, the AI calculated the stopping distance and figured out that an emergency stop could likely ensure that the self-driving car and the meanderer would not come into contact with each other.
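The gist of that stopping-distance reasoning can be sketched in a few lines of code. To be clear, this is a toy illustration under simple constant-deceleration physics; the function names, deceleration rate, reaction latency, and safety margin are all invented for the example, and real AI driving systems are vastly more elaborate.

```python
# Toy sketch of a stopping-distance check; all names and numbers are
# illustrative, not drawn from any actual self-driving car system.

def stopping_distance(speed_mps: float, decel_mps2: float, reaction_s: float) -> float:
    """Distance covered before a full stop: reaction roll-out plus braking, v^2 / (2a)."""
    return speed_mps * reaction_s + (speed_mps ** 2) / (2.0 * decel_mps2)

def should_emergency_brake(speed_mps: float, dist_to_conflict_m: float,
                           decel_mps2: float = 6.0, reaction_s: float = 0.25,
                           margin_m: float = 2.0) -> bool:
    """Brake if the car can come to rest before the projected conflict point, with margin."""
    return stopping_distance(speed_mps, decel_mps2, reaction_s) + margin_m <= dist_to_conflict_m

# At about 13.4 m/s (roughly 30 mph) with 40 meters to the projected
# intersection point, an emergency stop clears the pedestrian's path.
print(should_emergency_brake(13.4, 40.0))  # True
```

With only 10 meters to the conflict point, the same call returns False, which is the situation where braking alone cannot help and a swerve or other maneuver would have to be weighed instead.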
The pedestrian is saved.
Suppose though that the person did not come into the street, and at the last moment veered back onto the sidewalk. This means that the self-driving car needlessly came to a halt. You could presumably give the AI some thankful credit for having avoided a potential accident, but the accident never materialized and so you could equally criticize the AI for being an overly reactive scaredy-cat driver.
Meanwhile, remember that there is a passenger inside the self-driving car (we’re still pretending it was you). There you were, comfortably reclining and blissfully inattentive to the roadway conditions, rightfully so, reliant upon the AI to tackle the driving and make the proper decisions thereof.
Upon the sharp braking action taken by the AI driving system, you are tossed around like a sack of potatoes, perhaps with a bit of whiplash involved.
What the heck happened, you would likely be wondering, having no clue why the AI made the unexpected and utterly unannounced stop.
At this juncture, at the very least, you would be expecting the AI to explain what occurred. If the AI does not indicate why the roughshod driving action was undertaken, you are beset with doubts and would feel uneasy about whether the AI can adequately drive the self-driving car. Perhaps the AI has gone nuts, or an electronic power surge made its bits go bad.
We are now at the million-dollar question. Should the AI apologize to you?
Try this: Dear rider, you undoubtedly sensed that the self-driving car came to a sudden halt, for which please accept my apologies. This was done out of an abundance of caution and potentially might have saved the life of an errant pedestrian. Such a driving transgression is not what I aspire to while driving, but hopefully, you understand that such moments have merit and your forgiveness on this matter is welcomed. Sincerely, the AI driving the self-driving car.
Notice that this is a twofer, consisting of an explanation and simultaneously an inserted apology. The explanation would tend to ease your concerns about the AI having gone off the deep end. Does the apology also make you feel better?
Let’s turn our attention to AI that issues apologies and those earlier alluded to tradeoffs involved in having the AI do so.
For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/
On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/
I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/
Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/
AI Apologizing And Its Merits Or Demerits
Some insist that any apology by an AI system is completely, unabashedly, a fraudulent and hollow act.
The AI is not sentient. For those of you that might be tricked by the barrage of media postings that suggest that AI is sentient or near-to, do not fall for this hogwash. There isn’t any AI today that can be construed as sentient and there is nothing on the horizon that implies or purports to show that sentience can be reached in the near future.
Hence, the point is that the AI generating an apology is not the same as a human that proffers an apology. The AI cannot from-the-heart provide you or anyone with a heartfelt apology. This does not compute. In that overall sense, it can be argued that an apology by AI is nothing but fakery.
Worse still is the notion that by providing an apology there is an insidious insinuation that the AI is indeed sentient. In more formal parlance, this is tantamount to anthropomorphizing the AI. When the AI as a computer-based machine is made to seem like it is human, real humans will begin to think that the machine is a human or an equivalent.
That’s a dangerous slippery slope.
People that believe the AI can think and act as a human can are going to become reliant upon the AI to do things that the AI cannot achieve. For example, humans have common-sense reasoning (I realize some might chuckle and note that they know humans that are not able to exercise common sense, but put that cynicism to the side for the moment), but there aren’t yet any AI systems that have anything of the kind. Thus, the AI is not going to apply common sense to any instructions you might share with it, nor should you expect the AI to employ common sense when performing a task for you.
Speaking of cynics, some people are less heated-up about these AI apologies and claim that any human with a modicum of intelligence knows that the apology emitted by AI is merely a made-up artifact by the humans that programmed the AI. In this viewpoint, there is a hardened belief that everyone already should realize that the AI is just a computer and like any computer that spits out a report or displays a message, the whole thing is just a bunch of 1’s and 0’s being shoveled to-and-fro.
And, it could be argued that getting an apology, albeit from something that does not “know” what an apology is, provides a nice touch of humanity for the circumstances at hand.
The developers of the AI at least realized that sometimes the AI might go awry. That alone is important and perhaps a shocker, since many times an AI system will make a mistake or a misstep and not realize what has transpired. Thank goodness that there is an apology, acting as a canary in the coal mine, signaling that the AI has been embedded with some kind of error detection and response capability.
The words, too, of an apology can be soothing. Despite the AI not being human or anything akin, the words alone are being spoken and received.
Returning to reclining in a self-driving car that has come to a jolting halt, perhaps the apology can be the icing on the cake, wherein the cake was the explanation and the delightful icing is the supplemental apologetic words.
How far might this go?
There are stratified levels of apologies being cooked into some AI systems. There is a mild apology that is used when the incident or activity was only modestly disturbing for the human. Waiting to be used is the medium level apology that lays out a lengthier indication of what the AI is sorry about. Then there’s the heightened apology, profusely seeking to offer a sorrowfulness befitting a Shakespearean play.
The passenger in a self-driving car that got jostled by the sudden halt might get a mild apology if the stopping action was not especially severe. On the other hand, if the stop was the kind that makes your teeth rattle right out of your skull, the heightened version of the apology would likely be issued.
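The stratified apology levels just described might be wired up as a simple severity-to-message mapping. This is purely a hypothetical sketch; the thresholds, the normalized severity scale, and the wording of each tier are made up for illustration and are not drawn from any actual deployed system.

```python
# Hypothetical tiered-apology selection: a normalized incident severity
# (0.0 to 1.0) is mapped to a mild, medium, or heightened apology.
# Thresholds and message wording here are invented for illustration.

APOLOGY_TIERS = [
    (0.3, "Apologies for the minor slowdown just now."),
    (0.7, "I am sorry for the abrupt maneuver; it was taken out of caution "
          "to avoid a possible hazard."),
    (1.0, "My sincerest apologies for the severe braking. Safety required "
          "it, and I deeply regret the discomfort it caused you."),
]

def pick_apology(severity: float) -> str:
    """Return the first tier whose threshold covers the given severity."""
    for threshold, message in APOLOGY_TIERS:
        if severity <= threshold:
            return message
    return APOLOGY_TIERS[-1][1]  # clamp anything above 1.0 to the top tier

print(pick_apology(0.2))  # mild tier
print(pick_apology(0.9))  # heightened tier
```

A gentle slowdown scoring 0.2 would get the mild message, while a teeth-rattling stop scoring 0.9 would trigger the heightened, almost Shakespearean version.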
Skeptics have an entirely different viewpoint on these AI-based apologies.
Give me reparations or something tangible to make me whole and soothed, the skeptics exhort, rather than a bunch of gooey words of an apology.
In the case of a self-driving car, the AI ought to inform you that you will be getting a ten percent discount on the driving fare for the ride, or maybe waive the bill entirely. If there is a loyalty program associated with using the self-driving car, perhaps the AI can add some padded miles into your account, doing so to try and appease your dismay at the sudden stopping action. Etc.
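Those compensation ideas could likewise be reduced to a small lookup. Again, this is a made-up sketch: the severity cutoffs, the ten-percent figure from the text, and the loyalty-mile amounts are illustrative assumptions, not any company's actual policy.

```python
# Made-up sketch of the reparations ideas above: fare discount, full fare
# waiver, or loyalty-mile padding, chosen by incident severity. All
# thresholds and amounts are hypothetical.

def compensation_for(severity: float, fare: float) -> dict:
    """Return a hypothetical reparations package for a rough-ride incident."""
    if severity >= 0.9:
        # Waive the bill entirely and pad the loyalty account generously.
        return {"fare_due": 0.0, "bonus_miles": 500}
    if severity >= 0.5:
        # The ten percent discount mentioned in the text.
        return {"fare_due": round(fare * 0.9, 2), "bonus_miles": 100}
    # Token loyalty padding for minor jostling.
    return {"fare_due": fare, "bonus_miles": 25}

print(compensation_for(0.6, 20.0))  # {'fare_due': 18.0, 'bonus_miles': 100}
```

The skeptics' point is that the dictionary of tangible make-goods, not the accompanying apology text, is what actually soothes the jostled passenger.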
In short, there are the “where’s the beef” critics that eschew the apologies if there isn’t a bona fide payoff associated with the words themselves. In this view, any apology from either AI or a human is essentially worthless unless it is backed up by money in their pocket or some other form of compensation that deals with the matter underlying the incident that precipitated the apology in the first place.
Another concern voiced by some about apologies by AI is that they might open a veritable Pandora’s box. The act of the AI apologizing could be construed as an acknowledgment that the AI did something wrong. This admission can then be utilized in any legal action against the company that made or fielded the AI system.
For the self-driving car, potentially the automaker or self-driving tech maker is shooting themselves in the foot, one might say, by admitting to being at fault.
The next thing you know, there are zillions of lawsuits launched by people that have received AI-based apologies, and they are all going to cite the apology as concrete evidence, which certainly must mean that the AI was in the wrong and that the maker of the AI knew or could have known that the AI was going to bungle things.
Returning to the skeptics, potentially the AI offering an apology is almost like dollars in their pockets, given that the AI is handing them, on a silver platter, an admission that it was at fault, and presumably some compensation should be forthcoming accordingly, perhaps via legal court proceedings.
On a final note, there are also the perfectionists that have something to say about AI-based apologies. In their opinion, there should never be any instance of an apology being necessary, namely, the AI ought to be working perfectly all the time. If the AI is being built with apologies included, this means that the AI is not being built to achieve perfection. This is obviously then a mistake at the get-go because the AI should not be anywhere in the real world until it is perfected and won’t ever need to use an apology.
How do you like those apples?
Realists would laugh at this kind of sentiment and likely say that the perfectionists are idealistic, plus the common line these days is that striving for perfection is allegedly the enemy of the good (ostensibly borrowed from Voltaire, and a variant of the so-called golden mean ascribed by philosophers such as Confucius and Aristotle).
No apologies, no regrets, goes the famous line too.
Now that you know about AI-powered apologies, it might sweeten those human-uttered apologies that you get from time to time.
Of course, human-generated apologies are not necessarily always genuine and whole. Come to think of it, even once AI does become sentient, perhaps we won’t know if the AI is just trying to offer hollow platitudes to appease us versus sincerely expressing a deeply felt sorrow.
That seems like a good juncture to wrap things up on this topic and offer this sage quote by the Bard: “Parting is such sweet sorrow that I shall say goodnight till it be morrow.”
Copyright 2021 Dr. Lance Eliot; This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]