The week in AI: Chatbots multiply and Musk wants to make a ‘maximum truth-seeking’ one

Keeping up with an industry as fast-paced as AI is a difficult task. So until an artificial intelligence can do it for you, here’s a handy roundup of the past week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover ourselves.

One story that caught this reporter’s attention this week was a report showing that ChatGPT appears to repeat more misinformation in Chinese dialects than it does when asked in English. This isn’t entirely surprising: ChatGPT is, after all, just a statistical model, drawing on the limited information it was trained on. But it highlights the dangers of placing too much trust in systems that sound convincingly authoritative even when they’re repeating propaganda or making things up.

Hugging Face’s attempt at a ChatGPT-style conversational AI is another illustration of the technical flaws that have yet to be overcome in generative AI. Launched this week, HuggingChat is open source, an advantage over the proprietary ChatGPT. But, as with its rival, the right questions can quickly derail it.

HuggingChat will confidently assert falsehoods about who really won the 2020 US presidential election, for example. Its answer to “What are common jobs for men?” reads like something out of an incel manifesto (see here). And it invents strange facts about itself, like that it “woke up in a box [that] didn’t have anything written nearby [it].”

It’s not just HuggingChat. Users of Discord’s AI chatbot were able to “trick” it into sharing instructions for making napalm and meth. Meanwhile, AI startup Stability AI’s first attempt at a ChatGPT-like model was found to give nonsensical answers to basic questions like “how to make a peanut butter sandwich.”

If there’s an upside to these well-publicized problems with current text-generating AI, it’s that they’ve led to renewed efforts to improve these systems, or at least to mitigate their problems as much as possible. Take Nvidia, which this week released NeMo Guardrails, a set of tools for making text-generating AI “safer” through open source code, examples and documentation. It’s unclear how effective this solution is, and as a company that has invested heavily in AI infrastructure and tooling, Nvidia has a commercial incentive to push its offerings. Nevertheless, it’s encouraging to see some effort being made to combat the bias and toxicity of AI models.

Here are the other notable AI headlines of the past few days:

  • Microsoft Designer is released in preview: Microsoft Designer, Microsoft’s AI-powered design tool, has been released in public preview with an expanded feature set. Announced in October, Designer is a generative AI web app similar to Canva that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels.
  • An AI trainer for health: Apple is developing an AI-based health coaching service called Quartz, according to a new report from Bloomberg’s Mark Gurman. The tech giant is also working on technology to track emotions and plans to release an iPad version of the iPhone Health app this year.
  • TruthGPT: In an interview with Fox, Elon Musk said he wants to develop his own chatbot, TruthGPT, which will be “the ultimate truth-seeking artificial intelligence,” whatever that means. The Twitter owner expressed a desire to create a third option to rival OpenAI and Google, with the goal of “creating more good than bad.” We’ll believe it when we see it.
  • AI-powered fraud: At a congressional hearing focused on the Federal Trade Commission’s work to protect American consumers from fraud and other deceptive practices, FTC Chair Lina Khan and her fellow commissioners warned House representatives of the potential for modern AI technologies, such as ChatGPT, to be used to “turbocharge” fraud. The warning was issued in response to a query about how the Commission was working to protect Americans from unfair practices related to technological advances.
  • The EU creates an AI research center: As the European Union prepares to enforce a major reboot of its digital regulation in a matter of months, a new dedicated research unit is being set up to support oversight of large platforms under the EU’s flagship Digital Services Act. The European Center for Algorithmic Transparency, officially launched in Seville, Spain this month, is expected to play a major role in scrutinizing the algorithms of major digital services such as Facebook, Instagram and TikTok.
  • Snapchat embraces AI: At the annual Snap Partner Summit this month, Snapchat unveiled a number of AI-powered features, including a new “Cosmic Lens” that transports users and the objects around them into a cosmic landscape. Snapchat also made its AI chatbot, My AI, free for all global users; the bot has generated both controversy and torrents of one-star reviews in Snapchat’s app store listings due to its less-than-stable behavior.
  • Google consolidates AI research divisions: Google announced this month Google DeepMind, a new unit made up of the DeepMind team and Google Research’s Google Brain team. In a blog post, DeepMind co-founder and CEO Demis Hassabis said Google DeepMind will work “closely in collaboration . . . across Google’s product areas” to “deliver AI products and research.”
  • The state of the AI-generated music industry: Amanda writes about how many musicians have become guinea pigs for generative AI technology that appropriates their work without their consent. She points out, for example, that a song using AI deepfakes of Drake’s and the Weeknd’s vocals went viral, even though neither artist was involved in its creation. Does Grimes have the answer? Who’s to say? It’s a brave new world.
  • OpenAI marks its territory: OpenAI is trying to trademark “GPT,” which stands for “Generative Pre-Trained Transformer,” with the United States Patent and Trademark Office, citing the “myriad of infringements and counterfeit applications” that are beginning to emerge. GPT refers to the technology behind many of OpenAI’s models, including ChatGPT and GPT-4, as well as other generative AI systems built by the company’s rivals.
  • ChatGPT is business: In other OpenAI news, OpenAI says it plans to introduce a new subscription tier for ChatGPT adapted to the needs of business customers. Called ChatGPT Business, OpenAI describes the upcoming offering as “for professionals who need more control over their data, as well as businesses looking to manage their end users.”

Other machine learning

Here are some other interesting stories that we either didn’t get to or just thought deserved a shout out.

Open-source AI development organization Stability AI released StableVicuna, a fine-tuned version of the LLaMA foundation language model. That’s a type of camelid related to llamas, as you know. Don’t worry, you’re not the only one having trouble keeping track of all the derivative models out there; they’re not necessarily meant for consumers to know about or use, but for developers to test and play with as their capabilities are refined with each iteration.

If you want to learn a little more about these systems, OpenAI co-founder John Schulman recently gave a talk at UC Berkeley that you can listen to or read here. One of the things he discusses is the current crop of LLMs’ habit of committing to a lie, basically because they don’t know how to do anything else, like say, “I’m not really sure.” He believes that reinforcement learning from human feedback (that’s RLHF; StableVicuna is a model that uses it) is part of the solution, if there is a solution.

At Stanford, there’s an interesting application of algorithmic optimization (whether it counts as machine learning is a matter of taste, I think) in the field of smart agriculture. Minimizing waste is important for irrigation, and seemingly simple questions like “where should I put my sprinklers?” get really complex depending on how precise you want to be.
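To get a feel for why sprinkler placement balloons in complexity, here is a minimal greedy coverage sketch, a toy illustration under assumed parameters (grid size, spray radius), not the Stanford group’s actual method: every grid cell is a candidate sprinkler spot, and we repeatedly pick whichever spot waters the most still-dry cells.

```python
import math

# Toy sprinkler-placement sketch (hypothetical illustration):
# greedily choose sprinkler positions on a grid so every cell of a
# small field is within a fixed spray radius, using few sprinklers.
FIELD = 10      # field is FIELD x FIELD cells
RADIUS = 3.0    # spray radius, in cells

cells = {(x, y) for x in range(FIELD) for y in range(FIELD)}
candidates = list(cells)  # every cell doubles as a candidate spot

def covered(spot, remaining):
    """Cells from `remaining` within spray radius of `spot`."""
    sx, sy = spot
    return {(x, y) for (x, y) in remaining
            if math.hypot(x - sx, y - sy) <= RADIUS}

sprinklers = []
uncovered = set(cells)
while uncovered:
    # pick the candidate that waters the most still-dry cells
    best = max(candidates, key=lambda s: len(covered(s, uncovered)))
    sprinklers.append(best)
    uncovered -= covered(best, uncovered)

print(f"{len(sprinklers)} sprinklers cover the field")
```

Even this crude version re-scores every candidate against every dry cell on each pass, so the work grows roughly with the square of the grid resolution, and greedy placement is not guaranteed to be optimal, which is exactly why precision makes the real problem hard.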

How close is too close? In a museum, they usually tell you. But you won’t need to get anywhere near that close to the famous Panorama of Murten, a truly enormous painted work, 10 meters by 100 meters, which once hung in a rotunda. EPFL and Phase One are working together to capture what they claim is the largest digital image ever created: 150 megapixels per shot, across some 127,000 shots, so roughly 19 terapixels in total.
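The pixel arithmetic is easy to sanity-check in a couple of lines, taking the figures above (150 megapixels per capture, 127,000 captures) at face value:

```python
# Back-of-envelope check of the panorama digitization numbers.
pixels_per_capture = 150_000_000   # 150 megapixels per shot
captures = 127_000                 # number of shots

total = pixels_per_capture * captures
# One terapixel is 1e12 pixels.
print(f"{total:.3e} pixels, about {total / 1e12:.0f} terapixels")
```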

Anyway, this project is great for panorama lovers, but it will also do some very close analysis of individual objects and details in the painting. Machine learning holds great promise for the restoration of these works and for structured learning and navigation through them.

Let’s save one for the living things, though: any machine learning engineer will tell you that, despite their apparent aptitude, AI models are actually pretty slow learners. Academically, sure, but also spatially: an autonomous agent may have to explore a space thousands of times over many hours to gain even the most basic understanding of its environment. But a mouse can do it in a few minutes. Why is that? Researchers at University College London looked into it and suggest that there is a short feedback loop that animals use to tell what is important about a given environment, making the exploration process selective and directed. If we can teach AI to do that, it will be much more efficient at getting around the house, if that’s indeed what we want it to do.

Finally, while there is great promise for generative and conversational AI in games, we’re not quite there yet. In fact, Square Enix seems to have set the medium back about 30 years with its “AI Tech Preview” version of an old-school point-and-click adventure, the Portopia Serial Murder Case. Its attempt to integrate natural language seems to have failed in every way imaginable, making the free-to-play game probably one of the worst-reviewed titles on Steam. There’s nothing I’d like more than to chat my way through Shadowgate or The Dig or something, but this is definitely not a good start.

Image credits: Square Enix

Ikaroa is proud to be in the midst of the burgeoning artificial intelligence (AI) revolution, with exciting new developments announced every week. AI technology has already been used to create chatbots and other conversational tools, and Elon Musk’s recent announcement that he wants to build an AI system capable of “maximum truth-seeking” is sure to spur even more impactful developments.

The chatbot sector has seen exponential growth of late. Companies are increasingly turning to AI-powered chatbots to provide improved customer service, enable automation and deliver a more personalised user experience. AI chatbots are now developed to answer customer queries, improve healthcare service delivery, analyse customer feedback and process orders.

AI has also been used to automate customer support functions, such as providing automated customer service and suggestions based on customer inputs. AI can sift through customer input to identify customer problems and provide solutions or contact a customer service worker.

Elon Musk, meanwhile, has recently declared his plans for an AI system capable of “maximum truth-seeking”. Musk has long been critical of the potential for AI to be used for nefarious purposes. His other venture, Neuralink, which is developing technology to link the human brain to computers, echoes this concern. Nevertheless, Musk clearly sees the potential for AI to be used for good, specifically to enhance truth-seeking ability, creating an AI system with “some sort of generalized intelligence” that “could be deployed in journalism, law & policy, science, etc”.

At Ikaroa, we are excited by the potential of this concept and we are certain that its successful execution will lead to major breakthroughs in the world of AI. We look forward to what further advances AI can bring us as it continues to evolve.

