Why worry about artificial intelligence?

Profiles in Norwegian Science


Photo: Shutterstock
An AI robot head is depicted in pixels, but can this head compare with a human head in any way?

Ilan Kelman
Agder, Norway

Decades before smartphones became commonplace, the eternally philosophical and prescient comic strip Calvin and Hobbes proffered:

Calvin: I read that scientists are trying to make computers that think. Isn’t that weird?? If computers can think, what will people be better at than machines?

Hobbes: Irrational behavior.

Calvin: Maybe they’ll invent a psychotic computer.

Debates about machines taking over humanity, perhaps even causing our extinction, are nothing new.

In Charlie Chaplin’s 1936 movie Modern Times, his Little Tramp character struggles with life in the mechanized world. The 19th-century Luddites vandalized and wrecked labor-saving machinery. Shall we extrapolate back to opposition against the newfangled scourge of using natural energy to cook, heat, and light: the “Anti-Firests”?

Healthy and unhealthy fears of technological advancements are, ironically, part of being human. It is not just our subservience to technology, but also our becoming technology.

From Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (1968), the inspiration for the movie Blade Runner (1982), to Isaac Asimov’s story The Bicentennial Man (1976), we have long blurred the theorized line between human beings and intelligent robots. Discussions, hopes, and fears transcend our bodies and jobs into our minds and personalities.

The research laboratory OpenAI unleashed the Artificial Intelligence (AI) program ChatGPT on Nov. 30, 2022. Ask it a question and it will assimilate the wealth of online human knowledge (and lack thereof) to provide an apparent answer.

As always with the strange species of human beings, reactions vary. Doomsayers espouse the death and dearth of creativity. Student assignments, job applications, and media articles will now be produced artificially from a short human prompt. When advocates disagree with a fact or someone else’s stance, offending text is disparaged as coming from an AI.

Proponents are sanguine or actively supportive. In the same way that people toss around and collate ideas, bouncing off each other to improve collectively, AIs offer one more input into idea innovation. It is up to us to balance “many hands make light work” with “too many cooks spoil the broth.” Navigating the endless supply of ostensibly credible web information vying with the endless supply of web misinformation and disinformation, an AI can direct us to the most accurate conclusions and sources.

Yet, it so far cannot substitute for human filtering. ChatGPT quickly became notorious for fabricating references. Safeguards against advising dangerous actions and promoting racist and other harmful ideologies were no match for people’s ingenuity in discovering loopholes alongside ways of goading ChatGPT into inappropriate statements—which, incidentally, is exactly what we do with each other. Just witness the best reporters interviewing the worst politicians.

So why all the recent kerfuffle? Two months before ChatGPT began its rise to fame/infamy, a human being won an art competition using Midjourney, an AI that produces images based on prompts from humans. For years before that, internet search engines, search engine optimization, and automated translators were using AI. We have long been willingly slavish to the intelligence of these online offerings—or, more to the point, their programmers.

Spellcheckers, grammar checkers, voice-to-text applications, and other writing tools have been integral to writing on a computer for decades. Advocates explain how they make us better communicators, connecting across languages (notwithstanding unfortunate mistranslations), supporting people with disabilities, and reducing the rate of basic errors, even while imposing standardization with which many disagree.

Not that the goods and bads are new. The first AI computer program is generally accepted as appearing in the 1950s. Fundamental questions, debated throughout human history and subject to extensive fiction and philosophizing, remain: “What is thinking?” and “What is intelligence?”

A baseline is whether or not a human-built machine or program can then design, build, and implement a machine or program better—with more thinking and higher intelligence—than itself. Mickey Mouse, as the Sorcerer’s Apprentice in the movie Fantasia (1940), animated one broom to fetch water and inadvertently created an army of marching sticks.

Many of our efforts to ease our work and our lives mean developing technologies as tools. How each one is applied is for us to decide. We use, misuse, and abuse technologies according to human foibles. Could these tools, including AI, run away from us entirely? Imagine if each of Mickey’s brooms became better, smarter, stronger, and more autonomous than the one before it!

Many maintain that this situation is infeasible for AI. It apparently cannot improve forever, since limits exist for its capability, just as computing allegedly has limits of power and speed. Computers are mechanically faster than the human brain for numerous tasks, while AI certainly writes much faster than people. This does not mean that AI’s results are more accurate, more useful, or better in other ways, even if we could develop cross-cultural metrics for creativity, innovation, excitement, allure, interest, and other human emotions.

And so Calvin’s “psychotic computer” might depend on how “psychotic” is measured.

In 2015, a Swiss shopping bot that was created as an art installation ended up purchasing illegal drugs online. Human Swiss police confiscated the bot and the substances, ultimately deciding not to charge the bot or the artists. A few years later, airports in Japan started deploying autonomous AI-based robots for security.

Moving beyond our smartphones controlling us now, when might we see an AI judge and AI jury determine the fate of an AI accused of murdering another AI?

This article was written and edited entirely by (purported) human beings, using internet-connected computers with word processor, publishing, and search engine programs.

This article originally appeared in the August 2023 issue of The Norwegian American.

Ilan Kelman

Ilan Kelman is Professor of Disasters and Health at University College London, England, and Professor II at the University of Agder, Norway. His overall research interest is linking disasters and health, including the integration of climate change into disaster research and health research. Follow him at www.ilankelman.org and @ILANKELMAN on Twitter and Instagram.