It shouldn’t have come as such a surprise. So many movies, so many stories… as far back as the Golem and even before, humans have been worried about a greater intelligence than theirs taking over, designating human importance as somewhat lower than paper clips. Even worse – an intelligence they themselves have created.

It shouldn’t have taken college teachers complaining about students using ChatGPT to write their papers. A scheme of regulatory limitations and oversight could well have been put in place decades ago. But of course, one tends only to put out fires that are already burning rather than fireproof one’s home in advance; fireproofing costs money.


Roads leading from Rome ...

Artificial Intelligence (AI) is a rapidly advancing technology that has been transforming industries across the board. Its development, however, has raised concerns about the risks it poses to society, especially in the areas of privacy, security, and ethics. As a result, there is growing interest in regulating AI to ensure that its development and use remain safe and ethical.

One recent example of AI regulation is Italy’s ban on ChatGPT, the AI-based chatbot. The ban was imposed in response to concerns about potential misuse of the technology, particularly in spreading misinformation. Alex Scroxton in Computer Weekly asks whether the Italian government’s decision to ban ChatGPT was a “sober precaution” or an overreaction.

Strangely, what seems to concern the Italian privacy commissar (the Garante per la Protezione dei Dati Personali, GPDP) is that the engine has no age-restriction controls and does not always provide accurate responses. To give the agency its due, it also cited a recent breach that exposed private chat histories. The regulator is demanding that OpenAI increase transparency into how ChatGPT processes data and allow non-users to opt out of having their data processed (though if they are non-users, how this would be implemented remains a mystery). Conversely, Fabio Chiusi, a researcher at the Digital Society School in Amsterdam, claims that the ban “risks curbing legitimate use cases and innovation.” Notwithstanding such objections, OpenAI stood to face a €20m fine, and countries such as China, Iran, North Korea and Russia have banned the chatbot outright.

On the plus side, Italy’s response has sparked discussions about the need for effective AI regulation. Security Week reports that ChatGPT’s parent, OpenAI, has offered remedies to resolve Italy’s concerns, including measures to prevent the spread of misinformation.

On the other hand, the need for effective AI regulation is not limited to ChatGPT or Italy. Tech Monitor reports that the development of GPT-4, the next generation of OpenAI’s GPT language model, will require effective regulation to ensure that its development and use align with ethical and legal standards. The article quotes Foundation for Responsible Robotics president Aimee van Wynsberghe, who argues that “AI systems must be designed with transparency, accountability, and human oversight built-in.”

... and to Brussels

The argument is heating up rapidly, and for good reason. Even the industry itself has realized it is playing with fire.

Whether due to guileless concern or actual fear, Apple’s Steve Wozniak, Twitter’s Elon Musk (I’m kidding: Tesla’s Musk… in actuality, one of the FOUNDERS of OpenAI!!), and even OpenAI head Sam Altman have appended their names to an open letter calling for a freeze on the development of the AI technology (Large Language Models, or LLMs) that enables ChatGPT and its brethren. The letter has been signed by over a thousand AI experts, researchers, and investors, including representatives from Amazon, DeepMind, Google, Meta, and Microsoft.

According to the letter, AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” And all this arrives just in time, alongside the release of the latest ChatGPT (v4), improved thanks to web access, the ability to analyze and comment on graphics and images, higher accuracy, cross-linguistic abilities, and more. Its developers even believe that it is more secure: “GPT-4 is 82% less likely to respond to requests for disallowed content compared to GPT-3.5 and we have established a robust system to monitor for abuse.”

It comes as no surprise that AI regulation has become a political issue, with US President Joe Biden reportedly planning to prioritize AI regulation during his presidency. Cindy Cohn, executive director of the Electronic Frontier Foundation, warns against “the potential for abuse,” calling for the ethical use of such systems so that they cannot “harm people.” And, smelling the roses of more bureaucracy, the European Union is reportedly creating a task force to investigate ChatGPT’s compliance with the EU’s General Data Protection Regulation (GDPR).

Greg Noone of Tech Monitor believes that the regulatory debate will ultimately come down to a choice between EU-style attempts to govern specific models and iterations, and a more effective attempt to abstract these upwards into super-objectives: “establishing governing best practices for the use of AI and then leaving it up to sectoral regulators to apply them as they see fit…” After all, it seems the experts themselves are aware of the dangers; it only remains to explain these to the clerks.

Sparks already smoldering

For all their good intentions, the regulators and government spokespeople have woken up too late. Check Point is already reporting stolen ChatGPT Premium accounts being sold on the dark web, indicating not only that cyberthugs are already monetizing the new technology, but that they seem to have a business plan in action, whether for writing convincing phishing pages, stoking geopolitical mayhem, or getting the bot to write their malicious code. Stolen accounts, for example, can yield personal information, corporate information, credit data, and more.

Once again coming to the rescue only after the cinders have escaped the grille, OpenAI has launched a bug bounty program, with rewards of up to $20,000, to incentivize researchers to identify and report vulnerabilities in ChatGPT, a move reflecting OpenAI’s stated commitment to responsible AI development and the importance of collaboration between developers and researchers. Within days, Bugcrowd, the bounty platform administering OpenAI’s program, had rewarded 14 hunters.

And once again, new technologies require new channels of awareness. As in all forms of crime, the authorities will by nature be a step behind the perps, responding to new forms of theft and mayhem. If avoiding a mugging once required only that we not walk down dark alleys with our wallets hanging out, today we must take care in many more alleys of activity:

  • Don’t click on unchecked links (a minimal illustration of link checking follows this list).

  • Back up your data.

  • Keep your passwords secured offline.

  • Think before you post.

  • Watch out for your kids and elders.

  • Install novoShield.
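For the technically curious, here is a minimal sketch of what “checking a link” can mean in practice. It is purely illustrative: the lookalike domains in the blocklist are invented for the example, and a real tool (a browser’s safe-browsing service, or a dedicated product like novoShield) draws on continuously updated threat intelligence rather than a hard-coded list.

```python
from urllib.parse import urlparse

# Hypothetical lookalike domains, invented for this example only.
SUSPICIOUS_DOMAINS = {"examp1e-login.com", "paypa1-secure.net"}

def looks_suspicious(url: str) -> bool:
    """Flag a URL whose host is on the blocklist or that isn't served over HTTPS."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    return host in SUSPICIOUS_DOMAINS or parsed.scheme != "https"

if __name__ == "__main__":
    for link in ("https://openai.com/blog", "http://paypa1-secure.net/verify"):
        verdict = "suspicious" if looks_suspicious(link) else "no obvious red flags"
        print(f"{link} -> {verdict}")
```

Even this toy version captures the core idea: inspect where a link actually leads before trusting where it claims to lead.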
