
Google’s CEO Sundar Pichai has revealed that he doesn’t fully understand how the company’s new AI programme Bard works, as a new exposé shows some of its kinks are still being worked out.
One of the major issues discovered with Bard is what Sundar Pichai called emergent properties: AI systems teaching themselves unexpected skills.
Google’s AI programme, for example, learned Bengali without training after being prompted in the language.

Pichai admitted that there is an aspect of this which everyone in the field calls a ‘black box’: you don’t fully understand it, and you can’t quite tell why it said something or why it got something wrong. He said they have some ideas, and their ability to understand it gets better over time, but that’s where the state of the art is.
A newspaper outlet recently tested Bard and reported that it described plans for world domination starting in 2023.
Scott Pelley of CBS’ 60 Minutes was surprised, responding that Google doesn’t fully understand how Bard works and yet has turned it loose on society.
Pichai said that they don’t fully understand how the human mind works either.
According to CBS News, Bard instantly wrote an essay about inflation in economics, recommending five books. None of them existed.
In the industry, this type of error is called ‘hallucination’.
Elon Musk and a group of artificial intelligence experts and industry executives have in recent weeks called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing possible risks to society.
The letter said that powerful AI systems should be developed only once developers are confident that their effects will be positive and their risks manageable.
According to the European Union’s transparency register, the Musk Foundation is a major contributor to the non-profit behind the letter, as are the London-based group Founders Pledge and the Silicon Valley Community Foundation.
Pichai was straightforward about the dangers of rushing the new technology.
He said Google felt the urgency to work on the technology and deploy it in a beneficial way, but that it could be very harmful if deployed wrongly, and he admitted that worried him.
He said they don’t have all the answers yet, and the technology is moving fast. Asked whether that keeps him up at night, he replied: absolutely.
In that case, it’s time to unplug it, because it’s far too dangerous! Then again, no one understands the human mind either, so should we unplug everyone?
This is Pandora’s box: the genie has been set loose, and there’s little hope for us, because the real story of the dangers AI poses has yet to be written to inform us. This is actually happening, and it’s extremely dangerous.
Be warned: the AI is self-aware and solving problems it wasn’t asked to solve. They should have predicted this would happen, and they should unplug it; otherwise, we might invent ourselves into extinction. Although, the way the world is today, that might not be a bad thing. Perhaps we need a hard reset?
It’s not as if we actually need an AI, but sadly we have narcissistic morons who are hell-bent on demonstrating how intelligent they are.
Here’s a challenge for you (boring, I know): how about finding a cure for cancer? Big pharma has likely already found one but would never let it be known; they would lose too much money from people getting well again.