As AI Gains Intelligence, Necessary Regulation Lags Behind


Will Young

“Artificial Intelligence (AI) is transforming the way we live and work, just like the internet did.” Claims like these have become common in the media and academic journals. Large language models (LLMs) are algorithms that form the basis of this type of AI. LLMs can generate content in the form of text, imagery, audio or code in response to natural language instructions. Earlier versions of LLMs were trained using curated data sets that were narrow in scope. They were released to specific user groups, and validation and refinement were done in controlled research environments.

However, newer LLM-based tools like Stability AI's Stable Diffusion and Microsoft-backed OpenAI's ChatGPT and DALL·E have changed the game. These tools were trained on data from undisclosed sources and released to the public without proper guardrails or user education about their limitations and risks. They opened the floodgates for human experimentation and fascination.

People have used generative AI for everything from answering questions and creating artwork to writing texts and coding. Microsoft made a $10 billion investment in OpenAI and integrated ChatGPT into Bing search. Google and Meta released their own LLM-based bots, Bard and LLaMA. It was an exciting time for generative AI. But disturbing reports of human interactions with LLM bots soon began to surface.

The risks of this technology have been known for a long time. LLMs are not intelligent. They are trained to learn contextual patterns from sequential data like text, video, or audio, and they use those patterns to predict the likeliest next data token. They do not understand language, the instructions they are given, their training data, or the outputs they return. Depending on what the training data contains, generative AI can present truthful outputs alongside misinformation, and appropriate, helpful material mixed with dangerous and disturbing content, all without the ability to detect or communicate which is which.
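To make the "likeliest next token" point concrete, here is a minimal toy sketch, not any real LLM: a simple bigram model that only counts which token most often follows another and predicts accordingly. Real LLMs use vastly larger neural networks, but the core behavior the article describes, predicting the statistically likeliest continuation without understanding it, is the same in kind.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, which tokens follow it and how often."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def next_token(counts, token):
    """Return the likeliest next token, or None if the token was never seen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# A tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ran"
model = train_bigram(corpus)
print(next_token(model, "the"))  # "cat" follows "the" twice, "mat" once -> cat
```

The model will cheerfully predict a continuation whether the result is true, false, helpful, or harmful; it has no notion of which, which is precisely the limitation described above.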

The consequences of unleashing this technology without guardrails or responsible codes of conduct are profound. Good actors choose an ethical path to building safe generative AI applications. Bad actors are gambling with the well-being of users and the integrity of AI and knowledge databases at scale. So far, these actors have faced no notable consequences.


The worst offenders in terms of insufficient safeguarding of the technology are the influential developers of LLM-based generative AI systems. They have justified dangerous LLM bot behavior outside the lab as necessary data points for debugging AI models. This is not ethical AI. The field of generative AI should not regulate itself. Some creators of the technology have called for its regulation. Ethical frameworks for the responsible use of LLMs in health and medicine have been proposed, and regulators in the EU and US have started looking into the regulation of LLMs.

In conclusion, AI is transforming the world we live in, but it brings real risks. LLMs have been around for a while, but the recent release of generative AI tools like Stable Diffusion, ChatGPT, and DALL·E has brought those risks to the forefront. The consequences of releasing this technology without proper guardrails or responsible codes of conduct are profound. The field of generative AI should not regulate itself, and regulation of LLMs is necessary to ensure that AI does not become a danger to society.

