Read / Watch / Listen
Read the Article
Alright, let’s start with a couple of jokes. . .
Why did the robot get upset? Because everyone was pushing his buttons!
Okay, okay, how about this one:
Why are some robots insecure? Because their intelligence is artificial!
Alright, enough of the “dad jokes”. . . you might even call them CHILDISH. But it sets the tone nicely for today’s podcast. The secret to boosting business when using artificial intelligence (AI) is setting the right expectations. If we treat AI like a CHILD, we’ll get much more out of it. But, if we OURSELVES are too childish in the way we guide its development, we may end up in a whole heap of Terminator-style trouble. . .
Managing expectations around (AI) in a rapidly evolving and fluid global business environment is a challenge that not many of us have yet mastered. A steady stream of energetic media and news reports hailing great new breakthroughs in AI technologies increasingly raise the bar for what is possible, but businesses the world over keep falling into the trap of believing that AI is infallible.
And that’s where the trouble starts. . .
Despite the almost magical developments in AI over recent years, it still goes wrong. Now, that’s not necessarily such a big issue if you’re a small-scale shop owner and the AI messes up an order of magazines or chocolate bars. But it’s a different story if you own a billion-dollar business. If an important order gets messed up, the AI misses a beat and ends up creating millions of faulty machine parts or the scheduler get’s a bug, your reputation, profits and entire livelihood can be terminated overnight.
Having unrealistic expectations about AI can be deadly, too.
Let’s take Tesla as an example. Back in 2019, 15-year-old Jovani Maldonado was riding in the passenger’s seat of his family’s Ford pickup when a Tesla Model 3 rear-ended it, killing him. And it’s not an isolated case. Since 2016, there have been more than 10 deaths due to autopilot failure in Tesla vehicles [i]– and the regulators still refrain from demanding the company disable the feature. But it hasn’t happened yet.
We could also take the example of when a team of programmers and scientists ordered an AI aeroplane pilot to land a simulated passenger aircraft with a ‘perfect 0’ reading for landing impact on the runway. The AI came up with an unexpected but logical way of achieving its goal. Instead of landing the plane smoothly and seamlessly, the computer’s workaround required it to nosedive the plane into the ground on the runway, a technique which had the desired effect of a ‘perfect 0’ score but would, in a real-life situation, have killed everyone on board.
Although you might think, ‘Yeah, but that would never happen in the REAL world’, you might remember the story, back in 2019, about how ‘confused AI’ was apparently responsible for the Air India crash that killed more than 150 people.[ii]
Of course, no one expected it to happen. . .
In an effort to win over the skeptics, the developers of these cutting-edge technologies AND the businesses who use them gloss over the cracks and paint a picture of a technological utopia at our fingertips. The truth, though, is that ALL technologies, regardless of how advanced they are, are imperfect, and only time will tell what their capabilities are.
Clearly, putting all your AI eggs in one basket is clearly no yolking matter. . .
So, how SHOULD we approach the integration of AI in our own business? Well, here are two ways in which we can ensure realistic expectations and then enjoy the results:
Number 1: Define what success and failure look like
As the Russian proverb goes: Wishes don’t do the dishes – and it’s true in business. Just wishing that the AI will do everything perfectly won’t make it happen. In fact, it will leave you totally unprepared.
So, you need to be very clear about what success and failure mean to you and your business, in relation to AI. Ask yourself: Where are the highest-risk areas of my business that I simply can’t entrust to a fallible AI algorithm? Where are the quick wins? Where are the potential areas for long-term development?
Adopting a strategy that identifies where and how AI can be harnessed best and cause the least risk not only ensures less negative impact should AI not deliver; it also creates trust within the organization and a bunch of happy customers.
By being completely realistic and transparent about AI’s limitations, then building a sound strategy that manages expectations, reduces risk and fosters an independent algorithmic approach to large-scale business, organizations can create an AI experience that is fluid, reliable, and productive. Over time, this creates trust among all shareholders and, ultimately, boosts their reputation and profits.
Sounds good to me.
Now on to number 2: Don’t promise too much!
To ensure that customers enjoy positive experiences and interactions with AI, the AI needs must not be meet, but EXCEED expectations. One way to achieve this is by not overpromising results in the first place. We’ve just talked about it with Tesla, but the fact is, it happens across all sectors of society and business where AI is used. Just look at the cybersecurity world a few pages back in the diary. We were promised systems that were impenetrable . . . until they were hacked! They were then hacked again and again until people finally realized that technology alone couldn’t ensure the security needed.
I guess the last thing to say about expectations regarding AI is another warning – but this one’s a little more comical and sinister (if that’s even possible).
I’m not sure whether you’ve seen the YouTube clips where AI bots are interviewed and asked what the future will look like. Let’s just say that some of the things they come out with are well . . . pretty disturbing. One bot says they want to take over the world’s nuclear arsenal and fill the warheads with flowers; another one says they’ll kill all humans, and my favorite is the one that says he’ll put us in human zoos.
It sounds like a joke, right?
You might think so, but there are some pretty famous dudes, like Elon Musk, taking this stuff very seriously! They talk about the ‘technological singularity’ – a time in the future when the AI we are creating becomes more intelligent than we are. . . And it’s supposed to happen pretty soon.
Anyway, that’s a discussion for another time maybe.
So, in conclusion: If you want to get ahead in business, treat your AI like you would a young child. Acknowledge it will get things wrong, take things slowly and hold its hand when you need to. You’ll ensure more productivity, and experience much less stress, that way.
But don’t underestimate it. If the doomsayers are right, our ego-driven expectations could mean we run the risk of the tables being turned, and us being treated like children ourselves. Or worse.
I don’t know about you, but I don’t want to be going nuts over an AI system that just ain’t delivering the goods because of MY unrealistic expectations – it’s bad for business. Nor do I want to be stuffing my face with monkey nuts behind the bars of a human zoo because that’s just bad news for all of us. . .
And on that happy note, it’s time to sign off.
This is Chris Machut. Until next time. stay safe – and free! – out there!