The Biden administration today announced a new effort to address the risks surrounding generative artificial intelligence (AI), which has advanced at breakneck speed and set alarm bells ringing among industry experts.
Vice President Kamala Harris and other administration officials are scheduled to meet today with the CEOs of Google, Microsoft, OpenAI (the creator of the popular chatbot ChatGPT), and AI startup Anthropic. Administration officials plan to discuss the “fundamental responsibility” these companies have to ensure their AI products are safe and protect the privacy of American citizens as the technology becomes more powerful and capable of making decisions independently.
“AI is one of the most powerful technologies of our time, but to take advantage of the opportunities it presents, we must first mitigate its risks,” the White House said in a statement. “President Biden has been clear that when it comes to AI, we must put people and communities at the center by supporting responsible innovation that serves the public good while protecting our society, our security and our economy.”
This new effort builds on previous attempts by the Biden administration to promote some form of responsible innovation, but so far Congress has not advanced any legislation that would regulate AI. In October, the administration unveiled a plan for a so-called “AI Bill of Rights” as well as an AI risk management framework; most recently, the administration has pushed for a roadmap to create a national AI research resource.
The measures do not have any legal status; they’re just more guidance, studies and research “and they’re not what we need right now,” according to Avivah Litan, vice president and distinguished analyst at Gartner Research.
“We need clear guidelines on the development of safe, fair and responsible AI from US regulators,” she said. “We need meaningful regulation like what we see developing in the EU with the AI Act. While they are not doing everything perfectly at once, at least they are making progress and are willing to iterate. US regulators need to step up their game and pick up the pace.”
In March, Senate Majority Leader Chuck Schumer, D-NY, announced plans for rules around generative AI as ChatGPT grew in popularity. Schumer called for greater transparency and accountability in AI technologies.
The United States has trailed other governments on AI rules. Earlier this week, the European Union advanced the AI Act, proposed rules that would, among other things, require makers of generative AI tools to disclose any copyrighted material used by their platforms to create content. China has led the world in rolling out various AI governance initiatives, although most of those initiatives relate to citizen privacy and not necessarily safety.
Included in the White House initiatives is a plan for the National Science Foundation to spend $140 million on creating seven new research centers dedicated to AI.
The administration also said it had received “an independent commitment from leading AI developers,” including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles, on an evaluation platform developed by Scale AI at the AI Village at DEFCON 31.
“This will allow thousands of community partners and AI experts to thoroughly evaluate these models and explore how they align with the principles and practices outlined in the Biden-Harris administration’s AI Bill of Rights and AI Risk Management Framework,” the White House said.
Tom Siebel, CEO of enterprise AI application provider C3 AI and founder of CRM software provider Siebel Systems, said this week that there is a case for allowing AI vendors to regulate their own products, but that in a capitalist, competitive system they are unlikely to be willing to rein in the technology.
“I’m afraid we don’t have a very good track record there — I mean, look no further than Facebook,” Siebel told an audience at MIT Technology Review’s EmTech conference. “I would like to believe that self-regulation will work, but power corrupts and absolute power corrupts absolutely.”
The White House announcement comes after tens of thousands of technologists, scientists, educators and others put their names to a petition calling on OpenAI to halt for six months any further development of ChatGPT, which currently runs on the GPT-4 large language model (LLM).
Technologists are alarmed by AI’s rapid rise from improving tasks such as online search to being able to produce realistic prose and working software from simple prompts, and to create videos and photographs that are almost indistinguishable from real images.
Earlier this week, Geoffrey Hinton, known as “the godfather of AI” for his work in the space over the past 50 years or so, announced his resignation as an engineering fellow at Google. Along with his resignation, he sent a letter to The New York Times about the existential threats posed by AI.
Speaking at the EmTech conference yesterday, Hinton laid out how dire the consequences could be, and how little can be done, because industries and governments are already competing to win the AI race.
“It’s like some genetic engineers saying we’re going to improve grizzly bears; we’ve already upgraded them to 65 IQ, and now they can speak English, and they’re very useful for all sorts of things. But we think we can improve the IQ to 210,” Hinton told an audience of about 400 at the school.
AI can be self-learning and become exponentially smarter over time. Eventually, instead of needing human prompting, it will begin to think for itself. Once that happens, little can be done to stop what Hinton believes is inevitable: the extinction of humans.
“These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” he said. “And if they are much smarter than us, they will be very good at manipulating us. You will not realize what is going on. You will be like a two-year-old who is asked, ‘Do you want the peas or the cauliflower?’ and doesn’t realize that you don’t have to have either. And you’ll be so easy to manipulate.”
Hinton said his “only hope” is that competing governments, such as the US and China, can agree that allowing AI to have free rein is bad for everyone. “We’re all in the same boat when it comes to the existential threat, so we should all be able to cooperate to try to stop it,” Hinton said.
Others at the MIT event agreed. Siebel described AI as more powerful and dangerous than the invention of the steam engine, which led to the Industrial Revolution.
AI, Siebel said, will soon be able to undetectably mimic any type of content already created by humans — news, photos, videos — and when that happens, there will be no easy way to determine what’s real and what’s fake.
“And the damaging consequences of that are terrifying. It makes an Orwellian future look like the Garden of Eden compared to what’s capable of happening here,” Siebel said. “It can be very difficult to sustain a free and open democratic society. This must be discussed. It must be discussed in the academy. It must be discussed in government.”
Margaret Mitchell, chief ethics scientist at machine learning app provider Hugging Face, said generative AI apps such as ChatGPT can be developed for positive uses, but any powerful technology can also be used for malicious purposes.
“That’s called dual use,” she said. “I don’t know if there’s a way to make any sort of guarantee that whatever technology you put out there won’t be dual-use.”
Regina Sam Penti, a partner at the international law firm Ropes & Gray LLP, told attendees at the MIT conference that both the companies creating generative AI and the organizations buying and using their products bear legal responsibility. But most of the claims so far have been directed at large language model (LLM) developers.
With generative AI, most of the issues center around data usage, according to Penti, because LLMs consume massive amounts of data and information “collected from all corners of the world.”
“So, effectively, if you’re building these systems, you’re likely to face some liability,” Penti said. “Especially if you use large amounts of data. And it doesn’t matter if you use the data yourself or get it from a vendor.”
Copyright © 2023 IDG Communications, Inc.