There is presently no machine capable of superhuman intellectual achievements. I’m sorry to burst your bubble, but that’s how the universe works, at least for the time being.
In certain respects, AI (Artificial Intelligence) may appear to be smarter than humans. For example, big tech’s sophisticated neural networks can filter through millions of data points in seconds, a feat that would take a human more than a century.
But that is hardly a superhuman intellectual achievement. It’s a trade-off between human attentiveness and computer speed. Aside from raw speed and chance, there is no attention-based task in which a machine can reliably outperform a person.
As an example, consider computer vision. You can train an AI model to recognize photographs of cats. If you have 10 million photographs and need to know how many of them contain cats, AI should be able to do in seconds what would take humans years to complete.
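To make the speed side of that trade-off concrete, here’s a minimal sketch of the counting task using an off-the-shelf pretrained classifier. It assumes torchvision is installed; the `photo_paths` list and the use of ImageNet’s domestic-cat labels are illustrative placeholders, not a recipe any particular company follows.

```python
# Minimal sketch: counting cat photos with a pretrained image classifier.
# Assumes torchvision is installed. "photo_paths" is a placeholder for wherever
# your 10 million images live; the ImageNet label range is a simplification.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

CAT_CLASSES = set(range(281, 286))  # ImageNet indices 281-285: tabby ... Egyptian cat

def count_cats(photo_paths):
    cats = 0
    with torch.no_grad():
        for path in photo_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            if model(img).argmax(dim=1).item() in CAT_CLASSES:
                cats += 1
    return cats
```

Batched across GPUs, a loop like this gets through millions of images in a timeframe no human team could approach. That’s speed, not understanding.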
However, if you gave 100 school-aged children the task of identifying which photos included cats, no neural network in existence today could consistently outperform the youngsters.
The Tradeoff
This task shows how AI struggles to recognize a subject that doesn’t appear in a familiar pattern. For example, if you dressed up a cat as a clown, the average child would still recognize it as a cat, but an AI that hasn’t been trained on cats in costumes could struggle or fail at the task.
The tradeoff we’re discussing, where you exchange accuracy for speed, is at the heart of the bias problem in AI.
Suppose a meaningful share of the photos in our 10-million-image deck depicted cats dressed as clowns, and the model we were using struggled with costumed felines.
Although cats dressed as clowns are in the minority among cat photos, they are still cats.
This means that, unless the model is extremely resistant to clownery, the whole population of cats disguised as clowns will most likely be excluded from the AI’s count.
Of course, training an AI to detect cats disguised as clowns would be straightforward. But then there would be another edge case to train the AI for, and another after that.
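Here’s a small, purely synthetic sketch of that failure mode. Nothing in it comes from a real dataset; the “costumed” flag and the feature shifts are invented solely to show how a rare subgroup can be missed almost entirely while the overall numbers look fine.

```python
# Synthetic sketch: an under-represented subgroup quietly drops out of the count.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 10_000

# Half the "photos" contain cats; about 1% of those cats are dressed as clowns.
is_cat = rng.random(n) < 0.5
costumed = is_cat & (rng.random(n) < 0.01)

# Ordinary cats cluster around one feature pattern; costumed cats look "different".
features = rng.normal(size=(n, 2))
features[is_cat & ~costumed] += [2.0, 2.0]
features[costumed] += [-2.0, 2.0]

model = LogisticRegression().fit(features, is_cat)
pred = model.predict(features)

print("recall on ordinary cats:", recall_score(is_cat[~costumed], pred[~costumed]))
print("recall on costumed cats:", recall_score(is_cat[costumed], pred[costumed]))
# The first recall is high; the second collapses. The clowned-up cats vanish
# from the count even though they are, unmistakably, cats.
```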
A human child does not have the same issue. You may dress up a cat as you like, as long as you don’t obscure the cat’s identifying traits, and the youngster will recognize it as a cat in a costume.
This problem, at scale, is exactly why driverless cars are still not available outside of well-regulated tests.
Machine Superiority
Machine superiority is a worldview in which computers know what is best for humans and have the power to manage us nonviolently.
If legions of killer robots suddenly go back in time to use laser guns to dominate mankind, that’s a different type of dominance — one that’s outside the purview of this nonfiction essay.
The core notion underlying machine superiority is that one day we will construct an AI that is so brilliant that it can reason, plan, and adapt better than humans.
The concept is appealing because it envisions a future in which humans have finally solved all of the world’s problems… by inventing a magic box to do the solving for us.
But at present, there exists no machine that can outperform a typical human at any given task, with the exceptions of digital speed and chance.
When people and machines are given a narrowly scoped prediction task, such as playing chess or guessing whether someone is a Democrat or a Republican from a photograph of their face, the ability to forecast the next step in a constrained output space can often beat genuine intellect.
Tools of the Trade
Here’s what I mean: DeepMind’s game-playing systems don’t need intellect to beat humans at chess or Go; they need only a rigid adherence to probability and the ability to predict the next move. Looking at a chessboard, a human can think through a handful of candidate moves in a given stretch of time. In that same stretch, an engine can evaluate millions of continuations.
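The brute-force look-ahead behind that advantage can be sketched with a toy game. To be clear, this is not how DeepMind’s systems actually work (AlphaZero pairs a learned evaluation network with Monte Carlo tree search); the point is only that a machine can exhaustively weigh continuations a person never could. The subtraction game below (take 1 to 3 stones; whoever takes the last stone wins) is a hypothetical stand-in.

```python
# Toy look-ahead: exhaustively search every continuation of a tiny game.
# Not chess, and not AlphaZero's method; just the "consider every move" idea.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones_left: int) -> int:
    """+1 if the player to move can force a win, -1 otherwise."""
    if stones_left == 0:
        return -1  # the previous player took the last stone, so the mover has lost
    # Try every legal move and assume the opponent then plays perfectly.
    return max(-best_outcome(stones_left - take)
               for take in (1, 2, 3) if take <= stones_left)

def best_move(stones_left: int) -> int:
    """Pick the move that leaves the opponent in the worst position."""
    return max((take for take in (1, 2, 3) if take <= stones_left),
               key=lambda take: -best_outcome(stones_left - take))

if __name__ == "__main__":
    print(best_move(21))  # instantly finds the winning reply by checking everything
```

For a game this small, exhaustive search is trivial; real chess and Go engines rely on pruning, heuristics, and learned evaluations because the full game tree is astronomically large. None of that requires anything resembling human understanding.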
That’s not to say AI isn’t one of the most remarkable achievements in the history of computer science; it simply doesn’t have a superhuman level of intellect. In fact, it’s nowhere near human-level.
And no human in the world can determine another person’s political inclination by looking at their face. We can make educated guesses, and AI can frequently outperform us. People’s appearances, however, do not miraculously change; their politics, on the other hand, usually do.
A good way to think of artificial intelligence is as a tool. It is easier for a person to cut down a tree with an ax than without one, yet an ax cannot do anything without a human. The ax has superhuman durability and tree-chopping ability, but you wouldn’t call it superhuman.
An AI that can identify cats in photographs quickly is a useful tool to have if you ever need to spot a cat, but it is not superhuman. The same is true for one that can beat us at games.
Certain tasks are well suited to machines. However, the development of a superior digital being is still a long way off.
We’ll have to work things out on our own, at least for the time being. Fortunately, we have artificial intelligence to help us.