The risks and rewards of artificial intelligence: Color Us Connected
Guy Trammell Jr. and Amy Miller
This column appears every other week in Foster’s Daily Democrat and the Tuskegee News. This week, Guy Trammell, an African American man from Tuskegee, Ala., and Amy Miller, a white woman from South Berwick, Maine, write about artificial intelligence.
By Amy Miller
This may be the most depressing column I write. I’m giving readers fair warning.
A month or so ago, a friend suggested that an artificial intelligence (AI) app could write a press release just about as well as I could, which, by the way, is something I do in my day job. We entered some scanty information into the app, giving it a topic and directing it to write the release in the style of a governmental agency. Then poof, in seconds we had a well-crafted press release that looked like anything you might read in your weekly newspaper.
For days I worried not just about press releases, but also about computer-generated fact sheets, forged student essays and AI novels. Soon, my concerns expanded to the political implications of machines getting smarter – false articles on candidates, politicians and reality that could change the course of history.
But all of that became kids’ stuff after I began reading about what computers could do to our planet, to humanity.
The simplest definition I found for artificial intelligence is “the simulation of human intelligence processes by machines, especially computer systems.”
The problem is that this kind of intelligence can surpass humans in many ways, and in fact already has. A brilliant computer, directed by a crazy, evil or profoundly angry tyrant, could propel the end of humanity if it “wanted” to. One technology leader I read suggested that AI is an existential threat on the level of only two other threats – nuclear war and a deadly pandemic.
So what are the options for controlling these machines as they learn? It is widely believed that the intelligence of computers is growing exponentially, doubling or more each year. Some believe it is growing even faster than that. In other words, it is a runaway train.
Regulation seems like a paltry proposition given that the problem is global and many actors, even within our borders, ignore the rules. In general, government interference in the human desire for progress seems like a losing battle.
I asked for some thoughts from a younger, techier acquaintance, a guy who works for Meta, which owns Facebook. He didn’t disagree with much of what I said but was more sanguine than I. Think of all the good AI can do, he said. Think of the medical advances; think of how replacing workers means giving us precious time to socialize, exercise, garden or just think, not to mention sleep.
I told my Meta friend that the only reassuring solution I’d heard was the notion of creating a computer trained to neutralize the “evil” computer.
That, he said, only adds to the problem we are trying to minimize, giving more power to machines rather than less.
We ended by agreeing on the main points, but with different levels of alarm.
By Guy Trammell Jr.
In the 1980s, Tuskegee veterinary professor Dr. Tsegaye Habtemariam created a digital horse’s heart for experimentation instead of using live animals.
At a recent Congressional subcommittee hearing, the chairman’s greeting sounded exactly like him, but the voice and content were created by ChatGPT, an artificial intelligence program that researched him and created the talk. AI simply collects information and uses it to perform tasks.
This spring, Tuskegee University held a meeting in Greenwood to discuss AI’s benefits. AI development has exploded recently, with capabilities that did not exist a year ago. It can potentially save lives by modeling and forecasting global climate and weather. I use my cell phone’s AI to correct texts. Be My Eyes uses AI to assist the sight-challenged. We recognize that with benefits come risks. I remember being told our cell numbers would be private, yet I get unsolicited business calls. It’s not what I expected.
The Center for AI Safety has warned: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Extinction is a tall order to tackle. However, Dan Hendrycks, the center’s research director, talks about “the risk of sacrificing safety for development.” Profit should not outshine safety!
Some risks of weaponized AI include: creating counterfeit people who conduct internet scams; creating false information to gain profits or influence elections; producing genetically modified diseases; and generating predictions about a person’s behavior that lead to their arrest by police.
AI needs safeguards including: a) testing by independent scientists, b) global safety score cards and monitoring, c) restrictions protecting citizens’ privacy, d) developer liability for harm done, e) transparency about the content used in software development, f) override-proof refusal by AI of detrimental requests, g) alerts that people are interacting with AI instead of a real person, h) labeling similar to nutrition labels, flagging the highest-risk uses and describing what the product does, i) a new form of global policing of AI with enforceable laws, j) mandatory consent and compensation when intellectual property is used, k) funding for safety research, l) barriers to AI self-initiation, m) limits on AI’s power, n) AI that understands harm, and o) AI apps in more languages than English.
Booker T. Washington taught us to acquire knowledge for our Head, skills for our Hand, and character for our Heart. As these developments progress, our character must be guided by moral and ethical responsibility so that we will do no harm!
Amy and Guy can be reached at [email protected]