There’s tremendous promise in artificial intelligence – harnessing the latest computing power and vast amounts of data to solve some of the world’s greatest problems from curing cancer to addressing climate change.
But the race to create a system that mimics and even surpasses human intelligence could unleash perils that are hard to fathom outside the realm of science fiction.
“It’s a nightmare in so many different ways,” says Roman Yampolskiy, a professor of computer science at the University of Louisville. “When you go beyond human capacity to super intelligence, it stops being a tool. It becomes an agent, an independent agent, an agent we don’t know how to control.”
The implications of such technology are complex and wide-ranging. Yet scientists, ethicists, and legislators are just beginning to grapple with questions of how to harness AI’s powers before it disrupts everything from jobs and the economy to education and even human nature.
“Now we’re starting to see these generative AI systems be able to create things, to make things,” says Jason Thacker, an assistant professor of philosophy and ethics at Boyce College at Louisville’s Southern Baptist Theological Seminary. “I think that’s fundamentally challenging a lot of the assumptions of what we thought it meant to be human.”
A Disruptive Technology Unlike Any Other
Rudimentary forms of artificial intelligence have been around since the 1950s, but their functions were fairly limited to activities like playing a game of chess. In the last few years, though, the technology has evolved into programs that can perform more useful tasks, like answering complicated questions or writing essays based on prompts provided by users. Yampolskiy says current AI systems are starting to possess what he calls general intelligence. But within a few years, he predicts, AI will have super intelligence, which could enable it to make independent choices and discoveries.
“The goal is to create an equivalent of human capability both cognitive and maybe physical through robotics,” says Yampolskiy. “That’s where the real fun begins.”
Humans have experienced numerous disruptive technologies throughout history. Think of the social and economic changes brought on by the printing press, electricity, and the internal combustion engine. But the prospect of a technology gaining a “life” of its own could present entirely new opportunities – and threats – for humans.
“We are on the precipice of something that is straight out of a science fiction novel,” says state Rep. Josh Bray (R-Mount Sterling). “It’s really exciting but at the same time it’s a little bit scary.”
Bray points to ChatGPT, a free AI tool that can converse with users, answer questions, and compose essays. He says it has already shown it can pass bar exams and medical board tests. Advancements in AI are happening so fast, he says, that it’s hard for policymakers and the public to keep up.
Because of the rapid evolution and vast potential of AI, Rep. Nima Kulkarni (D-Louisville) says it’s important to start implementing guardrails for it. Beyond the accuracy of answers generated by current AI programs, Kulkarni says she worries about preventing the proliferation of digitally altered media (called deep fakes). She also wants rules about who owns all the data consumed and generated by AI, and who could be held liable for problems generated by the technology.
“How we treat (AI) and the policies that we put into place right now are how we will control and regulate its growth,” says Kulkarni.
That will include disruptions to the workforce. In time, AI could conduct scientific research, fulfill any writing tasks, create works of art and literature, or, with the aid of robotics, completely automate assembly lines and service industries.
“Long-term, all jobs will be replaced,” says Yampolskiy. “If you have a super intelligence system, there’s nothing you can contribute.”
The Prospect for Driverless Cars in Kentucky
Earlier this year Frankfort lawmakers debated legislation on one prominent form of AI technology: autonomous vehicles. Rep. Bray’s House Bill 135 would have created a framework for operating fully driverless cars and trucks on the state’s highways.
“When you start trying to talk about autonomous vehicles with people, they kind of look at you like you’ve got three heads because who can imagine a car driving itself,” says Bray.
The legislation passed the 2023 General Assembly but then was vetoed by Gov. Andy Beshear after the session concluded. In his veto statement, the governor said the bill didn’t adequately address safety and security issues around the use of these vehicles.
Kulkarni, who voted against HB 135, says the technology does offer great opportunities, but she says there are still numerous questions about oversight and regulation, liability concerns, and how driverless cars and trucks might function on narrow, curvy roads in eastern Kentucky during inclement weather.
“I don’t know that we as policymakers have enough of an understanding or enough information... on this technology to understand how to use it safely,” says Kulkarni.
Bray says autonomous vehicles are safer than human drivers, and the technology can handle bad weather and emergency situations. He says driverless vehicles are crucial for the trucking industry, which has struggled to fill open jobs in Kentucky and nationwide. He also argues that for every job an autonomous vehicle might displace, eight new jobs will be created to program and maintain driverless fleets.
“The technology is there, it’s absolutely safe, and we will be filing [the bill] again at the first part of next session,” says Bray.
What AI Could Mean for Education
Today’s general AI programs have already created headaches for educators. For example, how can teachers know if a student completed their own homework, or if the child simply copied answers generated by a free online chatbot like ChatGPT?
But AI won’t just help students take shortcuts. Fayette County elementary school teacher Donnie Piercey says educators could use AI to generate assignments and then grade them. Properly used, he says, AI could save teachers time that they could then devote to getting to know their students better and providing them with more personalized instruction. He says AI will only get more sophisticated, so teachers should harness it to improve the education profession and the lives of young people.
“It’s very, very important that we start to model for our students the ways that we can use this in positive ways outside of just using it to answer quick questions for us,” says Piercey, who was the 2021 Kentucky Teacher of the Year.
Colleges also are grappling with the implications of AI from student admissions through to instruction, says Trey Conatser, the director of the Center for the Enhancement of Teaching and Learning at the University of Kentucky.
“Our goal is to think about how we can leverage this technology in a way that enhances some of the goals that we have as a university… in terms of prioritizing things like student learning, the research enterprise, the service mission, and health care,” says Conatser.
To get the best results from AI, he says people will need to develop new skills to navigate the technology and critically evaluate the answers it provides. Conatser, who has a PhD in English, also wonders what we should call the output that AI creates. If it generates a novel, a painting, or a musical composition, is that actually art?
The existential questions will likely grow thornier as the technology evolves. If so much of our personal identity as humans is built around our occupations, what will it mean if some super-intelligent form of AI supplants us? How will we define ourselves, and humanity as a species, then?
For the generation coming of age in this brave new AI world, Piercey says today’s youth need to know not only how to safely use the technology, but also how to unplug from it and engage with human-generated creativity.
“If someone wrote this book for real or is able to play an instrument on the stage in a solo, I want to start to celebrate those kind of things all the more,” says Piercey.