But let’s see where it comes from.
Almost a decade ago, Dr. Geoffrey Hinton, a Turing Award winner and now former Google employee, was known to exclaim enthusiastically among his AI students, “Now I understand how the brain works!” Now, however, the “Godfather of AI” has moved away from Google, and is more likely to be found desperately ringing AI alarm bells.
In an exit interview with The New York Times, Hinton expressed deep concern about the rapid expansion of AI, saying, “it’s hard to see how you can prevent bad actors from using it for bad things.”
A direct line connects Hinton’s decades-long pioneering work in neural networks with today’s chatbots (ChatGPT, Google Bard, Bing AI). His breakthroughs more than a decade ago led Google to bring him on board to develop next-generation deep learning systems that could help computers interpret images, text and speech the same way humans do.
Talking to Wired in 2014, Hinton was clearly enthusiastic: “I get really excited when we discover a way to improve neural networks, and when that’s closely related to how the brain works.”
Hinton spoke very differently to The New York Times this week, outlining all the ways AI could run humanity right off the rails. Here are the key points:
The rush to compete means the rails fly off
While companies like Microsoft, Google and OpenAI often profess that they are taking a slow and cautious approach to AI development and the deployment of chatbots, Dr. Hinton told The New York Times that what worries him is the reality that increased competition is leading to a less cautious approach. And he’s clearly right: earlier this year we watched Google launch a Bard that wasn’t ready for primetime in order to answer the surprise appearance of Microsoft’s ChatGPT-powered Bing AI.
Can these companies balance a market imperative to stay ahead of the competition (Google remains #1 in search, for example) with the greater good? Dr. Hinton is now unconvinced.
Loss of truth
Dr. Hinton worries about the proliferation of AI leading to an abundance of fake content online. Of course, this is less a future concern than a real-time one, as people are now regularly fooled by AI music that spoofs the vocal gifts of masters (including dead ones), AI news footage being treated as real images, and generative images winning photography contests. With the power and ubiquity of deepfakes, few videos we watch today can be taken at face value.
Still, Dr. Hinton may be right that this is just the beginning. Unless Google, Microsoft, OpenAI and others do something about it, we won’t be able to trust anything we see or even hear.
A ruined job market
Dr. Hinton warned The Times that AI is set to take on more than just the tasks we don’t want to do.
Many of us have turned to chatbots like Bard and ChatGPT to write presentations, pitches, and even schedules. Most of the output isn’t ready for prime time, but some of it is, or is at least passable.
There are dozens of AI-generated novels for sale on Amazon right now, and the Writers Guild of America has expressed concern that, if a new contract isn’t agreed, the studios could outsource writers’ work to AI. And while there haven’t been widespread layoffs directly attributable to artificial intelligence, the growth of these powerful tools is causing some companies to rethink their workforces.
Unexpected and unwanted behavior
One of the distinguishing features of neural networks and deep-learning AI is that they can use large amounts of data to teach themselves. One unintended consequence of this brain-like power is that AI can learn lessons you never anticipated, and a self-determined AI could act on those lessons. Dr. Hinton said that an AI that can not only write but also execute its own code is of particular concern, especially in terms of unintended outcomes.
AI will be smarter than us
Today’s AI often seems smarter than humans, but with its propensity for hallucinations and fabricated facts, it’s far from a match for our greatest minds. Dr. Hinton believes the day when artificial intelligence outsmarts us is fast approaching, and certainly arriving faster than he originally predicted.
Artificial intelligence may be able to simulate empathy – see its recent use as a source of medical advice – but this is not the same as true human empathy. Super-intelligent systems that can figure everything out but have no regard for how their choices affect humans are deeply troubling.
While Dr. Hinton’s warnings stand in stark contrast to his original enthusiasm for the technology he helped invent, what he told Wired in 2014 about work on AI systems mimicking the human brain now sounds strangely prescient: “We stopped being the lunatic fringe. Now we’re the lunatic core.”
The ‘Godfather of AI’, Dr. Geoffrey Hinton, is one of the most respected pioneers in artificial intelligence (AI). Unfortunately, from his perspective, AI could also be a great villain, capable of ruining everything. Fortunately, AI experts like Ikaroa – a full stack tech company – are working towards developing ethical AI solutions that can minimize the risks posed by AI.
1. Unsafe cybersecurity: AI models are only as secure as their programming. AI technology can be used to trigger massive cyberattacks, leaving end users vulnerable to information leaks, malware and other malicious threats.
2. Wrong decisions: AI systems have the ability to make decisions based on their interpretations of complex data. But when these decisions are wrong – as they sometimes are – they can result in major losses of data and money, damaging society and the economy.
3. Job losses: As AI advances, more and more routine-based jobs will be automated, leading to mass unemployment amongst the general population.
4. Inequality: AI could be used to manipulate public opinion and control the political process, leaving less powerful individuals disadvantaged and powerless.
5. Superintelligence: AI systems could eventually become so advanced that they are smarter than humans – much smarter. They could abuse their power and eventually even destroy our species.
While AI carries some risks, it can also be beneficial in many ways. It can improve efficiency, accuracy and decision-making in many applications. It is up to us to ensure that AI is applied responsibly and that it is used for the benefit of society. That is why it is important for companies like Ikaroa to strive for the creation of ethical AI solutions that can help us build a better future.