Recommended soundtrack to listen to as you read: Flight of the Conchords, “Robots”, from their 2008 self-titled album.

A Journey Back to the Artificial Intelligence of My Youth

February 21, 2024

David Scharf, CCA President

The ethical implications presented by the use of artificial intelligence have garnered some needed and rightful attention in the actuarial world and beyond. (Apologies for the seeming heaviness of my opening line, but things will lighten up shortly, I promise; just bear with me.) While the particular parameters and potential problems that may be at play are still being defined and developed, an earlier version of this dilemma has already played out in books, film, and television, at a time when the artificial intelligence of today was yet to be born. To see what we may learn from that, journey back in time with me, to the AI of my youth - or rather the fictional flashing-light computer systems as portrayed on television - and see what ethical challenges were posed and presented via the small screen.

First up, let’s take a look at Knight Rider (the original series), which aired on NBC from 1982 to 1986. The star of the show was actor David Hasselhoff (later of Baywatch fame). However, I was always more interested in his shiny, sleek, mirror-black “co-star” KITT, the artificially intelligent talking car with superpowers such as the frequently employed “turbo-boost” (also quite the handy feature in a downtown traffic jam).

KITT was programmed to be “good” and behave ethically, with a goal to protect human life (though that in itself is not necessarily the best formula for navigating ethical dilemmas). And for the most part, KITT remained true to this cause; we would be quite fortunate if our current-day AI followed in KITT’s tire-steps.

However, danger lurked close behind. Please allow me to introduce you to KITT’s evil twin KARR.* This was AI gone “bad”, showcasing some serious ethical lapses. Unlike KITT’s mandate to protect human life, KARR’s goal was self-preservation – a dangerous and deadly proposition when this conflicted with the ethical.

With Knight Rider we have the good and the bad conveniently separated out for us and easy to see. Just one year after the final season of the original Knight Rider series, the first of the Star Trek spinoff series aired: Star Trek: The Next Generation. And in that very first season we have a similar good/evil dichotomy, though somewhat more advanced (this is, after all, now the 24th century!), in android form as the plaster-beige twins known as Data and Lore.

But I want to look at another model for television AI, one where the ethical issues are presented not in an external twin form, but within the well-intended AI itself, where they are much harder to detect until the AI has already caused great harm. Sticking with the Star Trek example, let us go back to the first season of The Original Series and the episode titled “The Return of the Archons,” which first aired on NBC on February 9, 1967 (slightly before my time, but re-aired countless times through the magic of syndication). This was one of several episodes where Star Trek ventured to tackle some rather complex AI issues. In this episode, an AI named Landru was created for the good of leading and protecting the people of the planet Beta III in Star System 6-11.** However, over time, as the vagaries of life inevitably presented themselves, Landru’s ability to uphold its mandate took a sour and totalitarian cult-like turn. Fortunately, Captain Kirk and Mr. Spock were able to convince Landru that it had indeed failed at its very mission and therefore must self-destruct in order to achieve its goal of preserving the people it was created to help.

Unfortunately, not all AI is as easily convinced. I draw your attention to 1968 (just one year after our Star Trek episode aired) and Stanley Kubrick and Arthur C. Clarke’s landmark film 2001: A Space Odyssey. An article by colleague David Driscoll does an excellent (and entertaining) job examining the ethical issues presented by the eventually murderous AI system called HAL and relating these concerns to the actuarial profession.***

Of course, these examples are from the world of science fiction, and many years old at that. Nevertheless, they demonstrate the ethical dangers that lurk in our new world of AI. While all of this may sound a bit chilling, by studying these potential pitfalls we can be poised to create the needed protocols and parameters to allow the burgeoning AI to flourish safely for our beneficial use in the actuarial world.

*With my apologies to the Rolling Stones, but I can’t resist the reference to their “Sympathy for the Devil.”

**As per Allan Asherman’s invaluable Star Trek Compendium, published by Pocket Books in 1989.

*** David L. Driscoll. “HAL the Actuary?” Contingencies (July/August 2023).
