Tech titans are rushing to stake their claim in the lucrative artificial intelligence chatbot market, but they are trading away all-important transparency in the process, writes Toby Walsh, chief scientist of the UNSW AI Institute at the University of New South Wales in Sydney, Australia. Expect more government regulation of AI, such as the draft EU AI Act, which the European Parliament passed on June 14 in a major step towards shaping global standards.
European parliamentarians in committee overwhelmingly approved the draft EU Artificial Intelligence Act: The legislation is the first major effort anywhere in the world to regulate AI and could contribute to the setting of global standards (Credit: Mathieu Cugnot / Shutterstock.com)
With an overwhelming majority, the European Parliament in Strasbourg, France, on June 14 passed draft legislation to regulate artificial intelligence (AI), a major step towards setting global standards. Among other things, the EU’s AI Act would prohibit systems and applications that pose an “unacceptable level of risk”, such as predictive policing tools or social scoring systems like those used in China to profile and categorize individuals according to their behavior and socioeconomic status. The law would also put limits on “high-risk AI”, such as programs that could influence voters or harm people’s health.
In particular, the legislation would set guardrails on generative AI, requiring content created by chatbots such as ChatGPT to be labeled as such. Developers of AI models would also have to publish summaries of the copyrighted data used for training, a potential complication for systems that generate human-sounding text from material found online, frequently from copyrighted sources. With this far-reaching law, Europe has moved further ahead on AI regulation than any other region or country in the world. A final version of the law could be passed by the end of this year, with a grace period to allow companies to adapt.
AI chatbots are like buses: You wait half an hour in the rain with none in sight, then three come along all at once. In March 2023, OpenAI released its newest chatbot, GPT-4. The name sounds more like a rally car than an AI assistant, but it heralds a new era in computing.
We are still working out what these chatbots can do. Some of it is magical: writing a complaint letter to the council about an undeserved parking ticket, or composing a poem for your colleague's 25th work anniversary. But some of it is more troublesome. Chatbots such as ChatGPT and GPT-4 will, for example, make stuff up, confidently telling you truths, untruths and everything in between. The technical term for this is “hallucination”.
The goal is not to eliminate hallucination entirely. How else would a chatbot write that poem if it could not hallucinate? The aim is to stop a chatbot from presenting hallucinations as fact, especially when they are offensive, illegal or dangerous.
At the same time that Microsoft, which has a commercial partnership with OpenAI, announced it was incorporating ChatGPT into all of its software tools, it let go of one of its AI ethics teams. Transparency sits at the heart of Microsoft’s responsible AI principles, yet the company had been secretly using GPT-4 within its new Bing search for months.
Google, which had previously held back its LaMDA chatbot from the public due to concerns about possible inaccuracies, appears to have been goaded into action by Microsoft’s announcement that Bing search would use ChatGPT. Google’s Bard chatbot is the result of adding LaMDA to its popular search tool. Rushing Bard out proved expensive for Google: A simple mistake in Bard’s first demo wiped US$100 billion off the share price of Google's parent company, Alphabet.
OpenAI, the company behind ChatGPT, put out a technical report describing GPT-4. OpenAI’s core mission is the responsible development of artificial general intelligence, AI that is as smart as or smarter than a human. But the technical report was more of a white paper, offering no technical details about GPT-4 or its training data. OpenAI was unapologetic about the secrecy, citing the competitive landscape first and safety second. Yet AI researchers cannot understand the risks and capabilities of GPT-4 if they do not know what data it is trained on. The only open part of OpenAI now is the name.
There is a fast-opening chasm between what technology companies disclose and what their products can do, and only government action can close it. If these companies are going to be less transparent and act more recklessly, then it falls to governments to act. Expect regulation.
We can look to other industries for what that regulation might look like. In high-risk areas such as aviation or pharmaceuticals, government bodies have significant powers to oversee new technologies. We can also look to Europe, whose forthcoming AI Act takes a strongly risk-based approach. A European Parliament committee passed the draft in May and the full Parliament approved it on June 14. A final version of the Act will likely come up for a vote by the end of 2023. Whatever shape this and other regulations take, they are needed if the world is to secure the benefits of AI while avoiding the risks.
This article is published under Creative Commons with 360info.