The Human Cost in the Race for AI's Holy Grail
It's only human to get enamored of the new shiny thing. Only if we dare to look away will we start to see the human cost of advancing humanity.
The race to solve humanity’s biggest problems with AI is getting crowded, with Big Tech and Small Tech wanting a piece of the pie. But ahead of the pack is attention-grabbing OpenAI and its biggest investor, Microsoft.
OpenAI has been making the most noise too. The Elon Musk–Sam Altman saga, fueled by Sam's double standards and Elon's jealousy, exposes the fundamental flaws of human thinking in their quest for the artificial.
Google has also been making a lot of noise lately in its attempt to reassure us that its various AI projects are not inherently biased, and that the failures are due to poor execution. The recent Gemini AI disaster has brought to light the real-life consequences of letting AI run free, and underscores the risks of unchecked control.
OpenAI’s ChatGPT and Google’s Gemini are impressive AI tools but of limited intelligence.
They are trained to complete tasks or actions only as accurately as the quality of their human teachers allows. Futurists call this Artificial Narrow Intelligence (ANI), while some call it Weak AI or simply AI.
Their ultimate goal is to be the first to claim the Holy Grail of AI — Artificial General Intelligence (AGI) — the moment when their narrow-brained generative-AI models become human-like, independently thinking machines.
AGI is a subset of AI and is theoretically much more advanced than traditional AI. While AI relies on algorithms or pre-programmed rules to perform limited tasks within a specific context, AGI could solve problems on its own and learn to adapt to a range of contexts, similar to humans.1
But the path to the finish line in this race for AGI is one paved with many lawsuits.